Are you doing puts via the socket, or a tsdb import? Is this one
metric or a bunch of metrics? We've been able to easily push 12k
puts/sec per box (steady state is around 4k per box at the moment).
Imports are faster, partly because they do less verification of the
data (with respect to things like time ordering), but also because
they can batch huge write requests.
We're generally bound by the write speed of HBase. Remember that the
metric ID is part of the row key, so a big import of one metric will
be hitting only one regionserver. To do a real perf test, make sure
you have lots of different metrics and that you have pre-split the
table so that data is going to as many regionservers as possible.
But yes, OpenTSDB uses asynchbase, which is highly threaded. When
doing big imports, or when catching up after some maintenance, I've
seen tsdb happily consume all of its cores doing its work.
--Dave
On Wed, Jun 6, 2012 at 2:29 AM, Sebastien Nahelou
# more /proc/sys/vm/max_map_count
65530
# more /proc/sys/kernel/threads-max
39793
# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 19896
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited