The numbers looked interesting, so I decided to benchmark Tornado myself
to check out how it fares against some Erlang tools.
======================
Since the data is highly graphical, I can only give the link to the rest
of the benchmark.
http://lionet.livejournal.com/42016.html
--
vlm
________________________________________________________________
erlang-questions mailing list. See http://www.erlang.org/faq.html
erlang-questions (at) erlang.org
It would also be interesting to see how these perform with respect to CPU and
memory usage, but I do realize these tests take loads of time.
Thank you, Lev, for sharing the results of these tests with the rest of
us.
cheers,
r.
I can't see from your tests which version of Erlang you are using. If R12, would R13 be a better performer?
________________________________________
From: erlang-q...@erlang.org [erlang-q...@erlang.org] On Behalf Of Lev Walkin [v...@lionet.info]
Sent: Saturday, September 19, 2009 3:54 AM
To: Erlang Questions
Subject: [erlang-questions] Erlang, Yaws, and the deadly Tornado
R13B01
V.
I do not think so, since it makes cross-module calls into modules such
as gen_tcp and lists. But let's try it anyway...
SMP-128-nohipe
7000 rps,7000,0,7000,0,7000,7000,7000,0,0,0
8000 rps,7982,0,8000,0,7998,7954,7994,0,0,0
9000 rps,8755,(533),9000,5,8785,8736,8744,5,6,5
10000 rps,8969,(1500),10000,15,9012,8953,8942,14,13,18
SMP-128-HIPE
7000 rps,6706,0,7000,0,6119,7000,7000,0,0,0
8000 rps,8001,0,8000,0,8002,8000,8002,0,0,0
9000 rps,8937,(33),9000,0,8923,8892,8998,0,1,0
10000 rps,9159,(1166),10000,11,9234,8661,9582,15,10,10
As you can see, HiPE fares marginally better, with 0.3% errors @ 9 kRPS,
whereas non-HiPE has 5% errors @ 9 kRPS.
Overall, this picture suggests that HiPE is not worth it here. The
cross-boundary calls between HiPE-compiled and non-HiPE code can kill
the whole idea. Also, HiPE-compiled code has proven unstable under real
workload in my project, which is a far more important factor.
Kostis Sagonas has armed me with a hipe:c(Module) hint, with
which you can recompile existing (standard) modules into
native ones.
Therefore I redid the hipe benchmark. Here's what I did:
hipe:c(lists).
hipe:c(proplists).
hipe:c(gen_tcp).
hipe:c(inet).
% Test:
18> code:is_module_native(gen_tcp).
true
19>
and, of course, I compiled Yucan as native, using
c(yucan, [native]).
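For completeness, here is one way to double-check which loaded modules
actually ended up native-compiled. This is a hedged sketch of my own,
using only the standard code module; the NativeMods name is just an
illustration:

%% List all loaded modules for which the code server reports
%% native (HiPE-compiled) code:
NativeMods = [M || {M, _Loaded} <- code:all_loaded(),
                   code:is_module_native(M) =:= true].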
Here are the updated stats:
Rate,Received reply rate,Normalized error rate (1/100%),"Generated request rate (also, expected reply rate)",Error rate,Attempt 1,Attempt 2,Attempt 3,Error 1,Error 2,Error 3
1000 rps,1000,0,1000,0,1000,1000,1000,0,0,0
2000 rps,1999,0,2000,0,1999,2000,2000,0,0,0
3000 rps,2999,0,3000,0,2999,2999,2999,0,0,0
4000 rps,3997,0,4000,0,3997,3997,3997,0,0,0
5000 rps,4997,0,5000,0,4998,4998,4997,0,0,0
6000 rps,5999,0,6000,0,5999,5999,5999,0,0,0
7000 rps,6999,0,7000,0,7000,6999,7000,0,0,0
8000 rps,7998,0,8000,0,7996,7996,8003,0,0,0
9000 rps,8422,66,9000,0,8972,8988,7308,0,1,1
10000 rps,8523,900,10000,9,7773,8234,9563,9,7,11
Compared to the earlier ones:
> SMP-128-nohipe
>
> 7000 rps,7000,0,7000,0,7000,7000,7000,0,0,0
> 8000 rps,7982,0,8000,0,7998,7954,7994,0,0,0
> 9000 rps,8755,(533),9000,5,8785,8736,8744,5,6,5
> 10000 rps,8969,(1500),10000,15,9012,8953,8942,14,13,18
>
> SMP-128-HIPE
>
> 7000 rps,6706,0,7000,0,6119,7000,7000,0,0,0
> 8000 rps,8001,0,8000,0,8002,8000,8002,0,0,0
> 9000 rps,8937,(33),9000,0,8923,8892,8998,0,1,0
> 10000 rps,9159,(1166),10000,11,9234,8661,9582,15,10,10
Here's a summary:
              8000 RPS err%    9 kRPS err%    10 kRPS err%
Non-HiPE            0%              5%             15%
HiPE(yucan)         0%              0.3%           11%
HiPE([...])         0%              0.6%            9%
Interpretation: contrary to my earlier assumption, compiling
the yucan.erl module itself with HiPE gave me the most benefit.
I now attribute this to the {packet, http} filter, which is
already implemented in C as part of the Erlang VM. Yucan makes
no significant amount of cross-module calls, so compiling the
rest of the system modules (gen_tcp, gen_server, inet) with
HiPE did not help much.
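For readers unfamiliar with the {packet, http} option: when it is set,
the VM's C-level driver parses the request line and headers, so the
Erlang side receives already-parsed tuples instead of raw bytes. A
minimal sketch of my own (not Yucan's actual code; port 8080 and the
variable names are arbitrary):

%% Accept one connection and read a pre-parsed HTTP request.
{ok, L} = gen_tcp:listen(8080, [binary, {packet, http},
                                {active, false}, {reuseaddr, true}]),
{ok, S} = gen_tcp:accept(L),
%% The request line arrives parsed by the VM, e.g.:
%%   {ok, {http_request, 'GET', {abs_path, "/"}, {1,1}}}
{ok, {http_request, _Method, _Uri, _Version}} = gen_tcp:recv(S, 0).

Since the parsing work happens in C regardless of how the calling
module was compiled, there is little left for HiPE to speed up on that
path.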
Overall, the effect of HiPE on the number of error-free
connections per second seems to be about 10%, which,
in my particular case, does not justify the additional
deployment complexity.