Hi Doug,
Thank you for your answer, and sorry for taking so long to respond.
I re-ran the test with the corrections you suggested, and everything
is working correctly.
$ ./perf_eval write
Evaluating random writes performance
Random writes: 300000 1000-byte rows in 21.391 seconds, 14024.9 rows
per second
$ ./perf_eval read
Evaluating random reads performance
Random reads: 300000 1000-byte rows in 250.601 seconds, 1197.1 rows
per second
After disabling compression as you described, I got the following
results:
$ ./perf_eval write
Evaluating random writes performance
Random writes: 300000 1000-byte rows in 25.668 seconds, 11687.5 rows
per second
$ ./perf_eval read
Evaluating random reads performance
Random reads: 300000 1000-byte rows in 248.167 seconds, 1208.9 rows
per second
When I removed the compression settings (presumably turning
compression back on) and re-ran the write test, I got:
$ ./perf_eval write
Evaluating random writes performance
Random writes: 300000 1000-byte rows in 17.947 seconds, 16715.5 rows
per second
I suspect the random write test runs faster with compression than
without it because the bottleneck is log write speed, and compression
reduces the amount of data that has to be written.
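For what it's worth, the throughputs implied by the timings above
(assuming the tool simply reports rows divided by elapsed seconds)
line up with the printed rows-per-second figures, and the first pair
of write runs shows roughly a 20% slowdown without compression:

```python
# Sanity check of the reported throughputs. Assumption: rows/sec is
# just rows / elapsed seconds; timings copied from the runs above.
runs = {
    "write, compression on":       (300000, 21.391),
    "read,  compression on":       (300000, 250.601),
    "write, compression off":      (300000, 25.668),
    "read,  compression off":      (300000, 248.167),
    "write, compression on (2nd)": (300000, 17.947),
}

for name, (rows, secs) in runs.items():
    print(f"{name}: {rows / secs:.1f} rows/s")

# Relative write cost of disabling compression (first pair of runs):
slowdown = 25.668 / 21.391
print(f"write slowdown without compression: {slowdown:.2f}x")
```

The read timings barely move between the two configurations, which is
consistent with the bottleneck being on the write path.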
Also, is there currently a Java API for Hypertable, or are there
plans to create one? I could not find anything relevant in src/java
in the source tree. It would be good to have a Java API compatible
with HBase's, so that HBase could easily be replaced with Hypertable
and vice versa.
Thanks,
Mikhail Bautin