20K requests per second and reality


S Ahmed

Jun 6, 2012, 9:59:53 AM
to dropwiz...@googlegroups.com
In reality, can you really achieve 20-30K requests per second on a single server, or do you end up being socket/IO bound before then? That is, in benchmarks you can see 20K requests per second, but in reality your client connections will take longer to complete, sockets will have to be kept around, and in the end you will end up being IO bound (by IO I mean socket connections).

I've never dealt with a high-throughput service endpoint in production, so I was hoping others could chime in.

Tatu Saloranta

Jun 6, 2012, 12:39:44 PM
to dropwiz...@googlegroups.com
I think this does require low latencies for both network and processing.
With a simple local 'echo a single query parameter as XML' service, I
was already able to get 10k/second a couple of years ago, on a
then-modern single-core machine. So I have no trouble believing that
such solid-state services (i.e. no DB lookups, and at most simple SSD
reads) could reach 20k-30k request rates on multi-core machines.
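
Roughly, the shape of that kind of service, assuming plain JAX-RS with
JAXB for the XML (the names here are illustrative, not the actual
benchmark code):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@Path("/echo")
@Produces(MediaType.APPLICATION_XML)
public class EchoResource {

    @XmlRootElement
    public static class Echo {
        @XmlElement
        public String value;

        public Echo() { }  // no-arg constructor required by JAXB
        public Echo(String value) { this.value = value; }
    }

    // Echo a single query parameter back as XML; no DB, no disk,
    // no downstream calls.
    @GET
    public Echo echo(@QueryParam("value") String value) {
        return new Echo(value);
    }
}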

But I would think that anything with higher latency (calls to other
servers, DB, longer reads, access across WAN) would probably be less
likely to get such rates.

FWIW, the service I am currently building is doing 3k/second without
breaking a sweat, and is throttled by the client setup (for now). It
does do file and/or local BDB access, but would probably cap out
somewhere slightly below 10k.

Not sure if the above helps, but I thought this is a good discussion to have. :-)

-+ Tatu +-

Coda Hale

Jun 6, 2012, 12:50:50 PM
to dropwiz...@googlegroups.com
The "hello world" example can do +20K req/sec on modern hardware with
99th percentile latencies in the single-digit milliseconds. This
includes request processing and JSON serialization (to Tatu's enduring
credit).
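
A minimal resource in that spirit looks something like the following
sketch (illustrative names, not the actual example code):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/hello-world")
@Produces(MediaType.APPLICATION_JSON)
public class HelloWorldResource {

    // Plain POJO that Jackson serializes to JSON.
    public static class Saying {
        private final String content;

        public Saying(String content) { this.content = content; }

        public String getContent() { return content; }
    }

    // Per request: format a greeting and hand it to Jackson; nothing else.
    @GET
    public Saying sayHello(@QueryParam("name") String name) {
        return new Saying(String.format("Hello, %s!", name == null ? "stranger" : name));
    }
}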

That said, this represents an ideal case — no business logic, no
external dependencies, no complicated object graphs, no validation,
little garbage collection, etc. — and is unlikely to be a good data
point for estimating the performance of arbitrary Dropwizard services.

--
Coda Hale
http://codahale.com

Tatu Saloranta

Jun 6, 2012, 1:19:22 PM
to dropwiz...@googlegroups.com
On Wed, Jun 6, 2012 at 9:50 AM, Coda Hale <coda...@gmail.com> wrote:
> The "hello world" example can do +20K req/sec on modern hardware with
> 99th percentile latencies in the single-digit milliseconds. This
> includes request processing and JSON serialization (to Tatu's enduring
> credit).

:-)

> That said, this represents an ideal case — no business logic, no
> external dependencies, no complicated object graphs, no validation,
> little garbage collection, etc. — and is unlikely to be a good data
> point for estimating the performance of arbitrary Dropwizard services.

I know that this is a good example of a gratuitous mini-benchmark, but
perhaps it'd be fun to stage a 100k/sec demo on a 4- or 8-core
machine... doing, I don't know, UUID generation or something.
There's always some amount of bragging rights in adding one more zero
either in front (TPxx) or back (TPS)!

-+ Tatu +-

Alexandru Nedelcu

Jun 6, 2012, 5:24:56 PM
to dropwiz...@googlegroups.com
On Wednesday, June 6, 2012 8:19:22 PM UTC+3, Tatu Saloranta wrote:
I know that this is a good example of a gratuitous mini-benchmark, but
perhaps it'd be fun to stage a 100k/sec demo on a 4- or 8-core
machine... doing, I don't know, UUID generation or something.
There's always some amount of bragging rights in adding one more zero
either in front (TPxx) or back (TPS)!

-+ Tatu +-


I've got a small app that is hosted on top of Heroku. Its purpose is to receive dictionaries of values and log those to the DB, then once every hour it rotates those logs to Amazon S3 ... so it's an application-specific logger for stuff.

When the request comes in, I deserialize the request into a HashMap and push it into a non-blocking queue. That's all the request does.
Then every 10 seconds a thread comes in, pulls stuff from the queue and executes a batch insert into the database.
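
In code, the pattern looks roughly like this (a minimal sketch; the resource class, the DAO and the exact types are illustrative, not the actual app):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;

@Path("/log")
@Consumes(MediaType.APPLICATION_JSON)
public class LogResource {

    // Hypothetical DAO that does the batch insert into the database.
    public interface LogDao {
        void batchInsert(List<Map<String, String>> batch);
    }

    // Non-blocking queue shared between request threads and the flusher thread.
    private final ConcurrentLinkedQueue<Map<String, String>> queue =
            new ConcurrentLinkedQueue<Map<String, String>>();

    public LogResource(final LogDao dao) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                // Every 10 seconds: drain whatever has accumulated
                // and write it out in a single batch insert.
                List<Map<String, String>> batch = new ArrayList<Map<String, String>>();
                Map<String, String> entry;
                while ((entry = queue.poll()) != null) {
                    batch.add(entry);
                }
                if (!batch.isEmpty()) {
                    dao.batchInsert(batch);
                }
            }
        }, 10, 10, TimeUnit.SECONDS);
    }

    // The request handler only enqueues the already-deserialized map and returns.
    @POST
    public void log(Map<String, String> values) {
        queue.offer(values);
    }
}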

This little app can easily handle 5000 requests per second on my laptop, and I work on a pretty shitty laptop. On Heroku with a single node it yields something like 3000 reqs/sec (tested with the client running on an external EC2 machine, so slight network latency was included), and Heroku nodes are modest at best. On top of a good server it can definitely do more than 10K.