finally got around to using async tomcat and did a comparison with
kws. as promised, here are some numbers ...
the bottleneck in my backend processing is a single-threaded portion
and disk. i haven't tried the nonblocking network io, but my payloads
are small, so i'm hoping that's not a big factor. for testing i ended
up using apache bench, and i added a command to my webapp that
performs a "random" read-only action
linux limits the number of open file descriptors, which limits the
concurrent connections that you can test. i work around this for
testing by running kws, tomcat and ab in a bash instance that has been
run with sudo, eg
sudo bash -c "ulimit -n 70000; su username"
ab -r -T application/x-www-form-urlencoded -p doc/POST-read.txt \
    -c 20000 -n 100000 localhost:8080/diary
ab is limited to 20000 concurrent connections. i have thin shims for
kws and tomcat that feed requests to a common processing engine, and
i've mimicked the kilim portions of kws in my tomcat shim (standard
scheduler with 4 threads). i believe it's apples to apples, but of
course the devil is in the details, so buyer beware
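
roughly, the tomcat shim looks something like the sketch below. this
isn't the actual code - the Engine class and its process method are
just stand-ins for the common processing engine, and the 4-thread
executor here is a plain-java stand-in for the kilim scheduler:

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/diary", asyncSupported = true)
public class TomcatShim extends HttpServlet {
    // 4 worker threads, mirroring the 4-thread kilim scheduler in the kws shim
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    protected void doPost(HttpServletRequest req, final HttpServletResponse resp)
            throws ServletException, IOException {
        // detach the request from the container thread
        final AsyncContext ctx = req.startAsync();
        workers.submit(new Runnable() {
            public void run() {
                try {
                    // Engine.process is a stand-in for the shared read-only action
                    resp.getWriter().write(Engine.process(ctx.getRequest()));
                } catch (IOException e) {
                    // swallow for the benchmark
                } finally {
                    ctx.complete();
                }
            }
        });
    }
}
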
tomcat 8.0.8, with <async-supported>, 1000000 total requests
concurrency     throughput (req/s)      95% latency (ms)
  (1000s)       tomcat        kws        tomcat     kws
    01         8942.88    13813.25        1016       52
    03         9952.78    14465.96        1051     1074
    04        10261.91    14280.45        1233     1150
    06         9836.01    14641.07        3028     1234
    07         8849.40    14716.42        3044     1291
    09        10442.45    14612.94        3074     1412
    10         9998.81    14477.38        3103     1481
    11         8940.53    14599.88        3242     1530
    12         8608.96    14407.21        3282     1626
    13         8642.78    14453.07        3557     1689
    14         8614.16    14594.65        4304     1731
    15         8707.69    14489.58        5079     1831
    17         8577.84    14361.63        7051     1917
    18         8548.55    14595.37        7060     2002
    19         8467.92    14616.16        7081     2108
    20         8294.48    14671.26        7121     2151
at high concurrency levels and when the total number of requests is
high, tomcat (or ab) appears to drop a few requests:
apr_socket_recv: Connection reset by peer (104)
my understanding is that these are more a symptom of how ab manages
the requests than a real problem, but i haven't investigated. with
kws, these drops only happen before the cache gets warmed up, ie when
disk is the severe limiting factor
throughput doesn't seem sensitive to concurrency on either server from
1000 to 20000 connections, with kws being roughly 50% faster throughout.
as concurrency increases, kilim wins on latency. i haven't tried to tune
tomcat, so perhaps these numbers could be improved (eg by limiting the
number of tomcat threads or using a read/write listener - a sketch of
the latter is below). but at first glance, tomcat async looks decent
when faced with 20k concurrent connections, making it sufficient to use
as a front end for a kilim-based backend
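
for reference, a read-listener version of the shim would look roughly
like this (a sketch only, untested - Engine.submit is again a stand-in
for handing the request off to the common engine):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletException;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/diary", asyncSupported = true)
public class NonBlockingShim extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final AsyncContext ctx = req.startAsync();
        final ServletInputStream in = req.getInputStream();
        final ByteArrayOutputStream body = new ByteArrayOutputStream();
        in.setReadListener(new ReadListener() {
            public void onDataAvailable() throws IOException {
                byte[] buf = new byte[4096];
                // drain whatever the container has buffered, without blocking
                while (in.isReady()) {
                    int n = in.read(buf);
                    if (n < 0) return;
                    body.write(buf, 0, n);
                }
            }
            public void onAllDataRead() {
                // full POST body is in hand - hand it to the common engine
                // (Engine.submit is a stand-in), which calls ctx.complete()
                // once the response has been written
                Engine.submit(ctx, body.toByteArray());
            }
            public void onError(Throwable t) {
                ctx.complete();
            }
        });
    }
}
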
On Fri, Aug 30, 2013 at 1:49 AM, seth/nqzero <bl...@nqzero.com> wrote:
> thanks for the reply sriram
>
>> The MOB tester is quite bitrotted, and not documented at all, so it'll
>> take a fair bit of effort to package up. There are a number of http testers
>> around anyway
>
>
> the blog posts comparing tornado and node.js all seem to use multiple
> httperf processes - a kilim based client might be lighter and allow for
> better simulated loads (and maybe comet ?). so if the code is unencumbered
> and it's just a matter of polishing the rot i'd be willing to try
>
>> As for the KWS, all the essential bits are already in kilim.http. If all
>> you're using Tomcat for is request and response handling (and not say, https
>> or cookies), then kilim http is more than sufficient and really really fast
>
>
> unfortunately i do need cookies to store a session id, but maybe i can parse
> that myself. i've mocked something up, but need to remove some vestigial
> framework stuff. i'll report back if i get some tomcat vs kws numbers
>
>> That said, there's no reason to fear threads
>
>
> my understanding is that a java thread uses about 1M for stack. i'm
> targeting lowendbox/vps setups so hoping to keep that footprint down. since
> i'm already using kilim for the backend, KWS seems like the perfect solution
>