On Thu, Feb 26, 2015 at 1:33 AM, Ludovic Gasc <gml...@gmail.com> wrote:
> To avoid TIME_WAIT, I used "sudo sysctl -w tcp_tw_recycle=1" command.
It avoids the TIME_WAIT problem.
But the difference between keep-alive and non keep-alive is still very
large, even with tcp_tw_recycle=1.
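As a hypothetical illustration of why this gap appears (the script is mine, not from the benchmark): with keep-alive, one TCP connection carries many requests; without it, every request opens and then closes a connection, and each closed connection leaves a TIME_WAIT socket behind on the closing side. A self-contained sketch using only the standard library:

```python
# Sketch: count how many distinct TCP connections the server sees for five
# requests, with and without client-side keep-alive. All names are mine.
import http.client
import http.server
import threading

connections_seen = set()

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 enables persistent connections

    def do_GET(self):
        # A new (host, port) pair here means a new TCP connection was opened.
        connections_seen.add(self.client_address)
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Keep-alive: one HTTPConnection reuses the same TCP connection throughout.
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(5):
    conn.request("GET", "/")
    conn.getresponse().read()
conn.close()
reused = len(connections_seen)

# No keep-alive: a fresh connection per request; each close leaves a
# TIME_WAIT slot on the client, so ephemeral ports cannot be reused at once.
connections_seen.clear()
for _ in range(5):
    c = http.client.HTTPConnection("127.0.0.1", port)
    c.request("GET", "/")
    c.getresponse().read()
    c.close()
fresh = len(connections_seen)
server.shutdown()
```

With keep-alive all five requests share one connection (`reused` is 1); without it each request burns a connection (`fresh` is 5), which is the per-request overhead the benchmark gap reflects.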
> About Meinheld, I've tested a little bit in the past. From my little tests,
> yes it improves performance, but not at the same level as aiohttp+API-Hour
> version.
> Moreover, to my knowledge, almost nobody uses that in production, contrary to
> Gunicorn. The goal was to compare the standard production setup, as
> explained in Django and Flask documentation, with aiohttp.web+API-Hour.
I admit that meinheld is not widely used.
But I think you should use an engine that supports keep-alive when
comparing with aiohttp.
How about using nginx in front of both aiohttp and the sync servers?
wrk -(HTTP on TCP)-> nginx -(uwsgi on unix socket w/o keep-alive)-> uWSGI
wrk -(HTTP on TCP)-> nginx -(HTTP on unix socket w/ keep-alive)-> Gunicorn (Tornado worker)
wrk -(HTTP on TCP)-> nginx -(HTTP on unix socket w/ keep-alive)-> aiohttp
Or how about not using keep-alive on aiohttp?
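A minimal sketch of the third setup above (nginx in front, keep-alive over a unix socket to the aiohttp backend). The socket path, pool size, and server block are assumptions, not from the thread; the first setup would use `uwsgi_pass` instead:

```nginx
# Hypothetical nginx fragment: keep-alive from nginx to the backend.
upstream aiohttp_backend {
    server unix:/tmp/aiohttp.sock;
    keepalive 32;                       # keep idle backend connections open
}

server {
    listen 80;

    location / {
        proxy_pass http://aiohttp_backend;
        proxy_http_version 1.1;             # required for upstream keep-alive
        proxy_set_header Connection "";     # drop the default "close"
    }
}
```

Without `proxy_http_version 1.1` and the cleared `Connection` header, nginx closes the upstream connection after every request, which defeats the keep-alive comparison.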
>
> To be honest, for me, Meinheld uses a little bit black magic to try to
> transform sync code to async; I don't recommend that in production for a
> complex Web application.
Meinheld's async feature is based on greenlet (like gevent).
But you can use meinheld without using its async API.
It can be a high-performance sync server that supports keep-alive.
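To make the "sync API only" point concrete, here is a minimal hypothetical WSGI app. Nothing in it is meinheld-specific; the worker is only selected on the gunicorn command line, so the application stays plain synchronous code:

```python
# A plain sync WSGI app; the app name and response body are mine.
def app(environ, start_response):
    body = b"Hello"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Served with keep-alive via the meinheld worker (per meinheld's
# documented gunicorn usage):
#   gunicorn -w 4 -k meinheld.gmeinheld.MeinheldWorker app:app
```

The same `app:app` module runs unchanged under Gunicorn's default sync worker, which is what makes a like-for-like keep-alive comparison possible.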
--
INADA Naoki <songof...@gmail.com>
On Wed, 25 Feb 2015 23:44:33 +0100
Ludovic Gasc <gml...@gmail.com> wrote:
>
> I've disabled keep_alive in api_hour and quickly tested the agents list
> webservices via localhost: I get 3334.52 req/s instead of 4179 req/s, 0.233 s
> average latency instead of 0.098 s, and 884 errors instead of 0.
> It isn't a big change compared to the other Web frameworks' values, but it's
> a change.
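For scale, a quick computation on the figures quoted above (the script is mine; only the numbers come from the thread):

```python
# Figures quoted above (localhost, keep_alive disabled in api_hour).
with_ka, without_ka = 4179.0, 3334.52      # requests per second
lat_ka, lat_no_ka = 0.098, 0.233           # average latency, seconds

throughput_drop = (with_ka - without_ka) / with_ka * 100
latency_ratio = lat_no_ka / lat_ka

print(f"throughput drop: {throughput_drop:.1f} %")   # ~20.2 %
print(f"latency ratio: {latency_ratio:.1f}x")        # ~2.4x
```

So disabling keep-alive costs about a fifth of the throughput and more than doubles the average latency in this run, which is arguably more than "not a big change".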
IMO, the fact that you get so many errors indicates that something is
probably wrong in your benchmark setup.
It is difficult to believe that
Flask and Django would behave so badly in such a simple (almost
simplistic) workload.
Regards
Antoine.
On 26 Feb 2015 12:01, "INADA Naoki" <songof...@gmail.com> wrote:
>
> > What's the difference between my benchmark and a server that receives a lot
> > of requests?
> > For you, this use case doesn't happen in production? Or maybe you have a
> > tip to avoid that?
>
> First, I don't use Gunicorn's default (sync) worker for receiving
> requests from clients directly.
> It should be used behind nginx or a similar buffering reverse proxy.
>
> Second, I don't use Gunicorn's default worker for high load.
> Nginx plus uWSGI over a unix domain socket is much faster than Gunicorn's
> sync worker.
I'll try to test as many scenarios as possible this weekend, at least with nginx.
[...] To my knowledge, in my benchmark, I don't use threads, only processes.