[erlang-questions] Erlang, Yaws, and the deadly Tornado


Lev Walkin

Sep 19, 2009, 3:54:13 AM
to Erlang Questions

======================
Since Facebook's acquisition of FriendFeed, a bunch of technologies have
been released into the wild, including, most notably, the Tornado web
server, written in Python. Tornado is touted as «a scalable, non-blocking
web server and web framework». See the Wikipedia article
http://en.wikipedia.org/wiki/Tornado_HTTP_Server for details on the
performance of that server, as well as some comparisons with other web
servers.

The numbers looked interesting, so I decided to benchmark Tornado myself
to check out how it fares against some Erlang tools.
======================

Since the data is highly graphical, I can only give the link to the rest
of the benchmark.

http://lionet.livejournal.com/42016.html

--
vlm

________________________________________________________________
erlang-questions mailing list. See http://www.erlang.org/faq.html
erlang-questions (at) erlang.org

Roberto Ostinelli

Sep 19, 2009, 6:38:12 AM
to Lev Walkin, Erlang Questions
glad to see that yucan and misultin seem to outperform all ;)

it would be interesting to also see how these perform re cpu and memory
usage, ... but i do get that these tests need loads of time.

thank you lev for sharing the results of these tests with the rest of
us,

cheers,

r.

Evans, Matthew

Sep 19, 2009, 11:55:06 AM
to Lev Walkin, Erlang Questions
Thanks for sharing.

I can't see in your tests which version of Erlang you are using. If R12, would R13 be a better performer?

Lev Walkin

Sep 19, 2009, 1:30:07 PM
to Evans, Matthew, Erlang Questions
Evans, Matthew wrote:
> Thanks for sharing.
>
> I can't see in your tests which version of Erlang you are using. If R12, would R13 be a better performer?

R13B01

Valentin Micic

Sep 19, 2009, 3:37:58 PM
to Lev Walkin, Evans, Matthew, Erlang Questions
Do you think that HiPE may make any difference here?
Yes, Yucan!

V.

Lev Walkin

Sep 19, 2009, 5:13:53 PM
to Valentin Micic, Evans, Matthew, Erlang Questions
Valentin Micic wrote:
> Do you think that HiPE may make any difference here?

I do not think so, since it makes inter-module calls to modules such
as gen_tcp and lists. But let's try it anyway...


SMP-128-nohipe

7000 rps,7000,0,7000,0,7000,7000,7000,0,0,0
8000 rps,7982,0,8000,0,7998,7954,7994,0,0,0
9000 rps,8755,(533),9000,5,8785,8736,8744,5,6,5
10000 rps,8969,(1500),10000,15,9012,8953,8942,14,13,18

SMP-128-HIPE

7000 rps,6706,0,7000,0,6119,7000,7000,0,0,0
8000 rps,8001,0,8000,0,8002,8000,8002,0,0,0
9000 rps,8937,(33),9000,0,8923,8892,8998,0,1,0
10000 rps,9159,(1166),10000,11,9234,8661,9582,15,10,10

As you see, HiPE scales marginally better, with 0.3% errors @ 9 kRPS,
whereas non-HiPE has 5% errors @ 9 kRPS.

Overall, this picture suggests that it is not worth it. Inter-module
calls between HiPE and non-HiPE code can kill the whole idea. Also,
HiPE-compiled code has proven unstable under real workloads in my
project, which is a far more important factor.

Lev Walkin

Sep 19, 2009, 8:01:30 PM
to Valentin Micic, Evans, Matthew, Erlang Questions
Lev Walkin wrote:
> Valentin Micic wrote:
>> Do you think that HiPE may make any difference here?
>
> I do not think so, since it makes inter-module calls to modules such
> as gen_tcp and lists. But let's try it anyway...

Kostis Sagonas has armed me with a hipe:c(Module) hint, with which
you can recompile existing (standard) modules into native code.

Therefore I redid the hipe benchmark. Here's what I did:

hipe:c(lists).
hipe:c(proplists).
hipe:c(gen_tcp).
hipe:c(inet).

% Test:
18> code:is_module_native(gen_tcp).
true
19>

and, of course, I compiled Yucan as native, using

c(yucan, [native]).
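
For completeness, the per-module steps above can be folded into one shell
expression; the fun shown here is just a hypothetical convenience wrapper,
not part of any library:

```erlang
%% Natively recompile each module with hipe:c/1, then confirm the
%% loaded code really is native. hipe:c(M) returns {ok, M} on success.
1> Recompile = fun(M) ->
1>     {ok, M} = hipe:c(M),
1>     code:is_module_native(M)
1> end.
2> [{M, Recompile(M)} || M <- [lists, proplists, gen_tcp, inet]].
```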


Here are the updated stats:

Rate,Received reply rate,Normalized error rate (1/100%),"Generated request rate (also, expected reply rate)",Error rate,Attempt 1,Attempt 2,Attempt 3,Error 1,Error 2,Error 3
1000 rps,1000,0,1000,0,1000,1000,1000,0,0,0
2000 rps,1999,0,2000,0,1999,2000,2000,0,0,0
3000 rps,2999,0,3000,0,2999,2999,2999,0,0,0
4000 rps,3997,0,4000,0,3997,3997,3997,0,0,0
5000 rps,4997,0,5000,0,4998,4998,4997,0,0,0
6000 rps,5999,0,6000,0,5999,5999,5999,0,0,0
7000 rps,6999,0,7000,0,7000,6999,7000,0,0,0
8000 rps,7998,0,8000,0,7996,7996,8003,0,0,0
9000 rps,8422,66,9000,0,8972,8988,7308,0,1,1
10000 rps,8523,900,10000,9,7773,8234,9563,9,7,11

Compared to the earlier ones:

> SMP-128-nohipe
>
> 7000 rps,7000,0,7000,0,7000,7000,7000,0,0,0
> 8000 rps,7982,0,8000,0,7998,7954,7994,0,0,0
> 9000 rps,8755,(533),9000,5,8785,8736,8744,5,6,5
> 10000 rps,8969,(1500),10000,15,9012,8953,8942,14,13,18
>
> SMP-128-HIPE
>
> 7000 rps,6706,0,7000,0,6119,7000,7000,0,0,0
> 8000 rps,8001,0,8000,0,8002,8000,8002,0,0,0
> 9000 rps,8937,(33),9000,0,8923,8892,8998,0,1,0
> 10000 rps,9159,(1166),10000,11,9234,8661,9582,15,10,10

Here's a summary:

              8000 RPS err%   9 kRPS err%   10 kRPS err%
Non-HiPE      0%              5%            15%
HiPE(yucan)   0%              0.3%          11%
HiPE([...])   0%              0.6%          9%

Interpretation: contrary to my earlier assumption, compiling the
yucan.erl module with HiPE is what got me the most benefit. I now
attribute it to the use of the {packet, http} filter, whose parsing is
already implemented in C as part of the Erlang VM. Yucan makes no
significant number of inter-module calls, so compiling the rest of the
system modules (gen_tcp, gen_server, inet) to native code did not help
much.

Overall, the effect of HiPE on the number of error-free
connections per second seems to be about 10%, which,
in my particular case, does not justify the additional
deployment complexity.
