Number of entries | Chronicle* Throughput | Chronicle RSS | HashMap* Throughput | HashMap Worst GC pause | HashMap RSS
---|---|---|---|---|---
10 million | 30 Mupd/s | ½ GB | 155 Mupd/s | 2.5 secs | 9 GB
50 million | 31 Mupd/s | 3⅓ GB | 120 Mupd/s | 6.9 secs | 28 GB
250 million | 30 Mupd/s | 14 GB | 114 Mupd/s | 17.3 secs | 76 GB
1000 million | 24 Mupd/s | 57 GB | OOME | 43 secs | NA
2500 million | 23 Mupd/s | 126 GB | Did not test | NA | NA

(Mupd/s = million updates per second; RSS = resident set size; OOME = OutOfMemoryError.)
You are right that your mileage can vary *a lot* based on the size of the entry and how you do the serialization.
We offer a number of modes of use, and we benchmark the fastest option: not only does it give a bigger number, it is also the situation where most of the time is spent in the Chronicle Map code rather than in serialization/deserialization. I.e. it is the map we are benchmarking.
The most compact option is to use an LZW-compressed Externalizable or our BytesMarshallable serialized object. Depending on the size and structure of the object you could get 1/18th the size with this option, but results range from 1/100th the size to worse than not compressing at all. I.e. the content of your data matters when using compression.
As LZW is expensive, we also support Snappy and no compression. Snappy compression is much faster but uses more space than LZW.
For one use case we examined, with large String values, using Chronicle Map alone halved the size of the Strings, mostly due to UTF-8 encoding; Snappy was 1/10th the size, and LZW was better than 1/20th the size of a ConcurrentHashMap storing the same Strings. If you serialize an object you get different results, as Snappy is optimised for compressing text fast, AFAIK.
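To see why the content of the data dominates the compression ratio, here is a minimal sketch using the JDK's built-in DEFLATE (`java.util.zip.Deflater`) as a stand-in for the LZW option discussed above; the class and method names are made up for illustration:

```java
import java.util.Random;
import java.util.zip.Deflater;

public class CompressionRatioDemo {
    // Compress a byte[] with DEFLATE and return the compressed size.
    // (DEFLATE here is a stand-in for the LZW option; the point about
    // content-dependent ratios holds for both.)
    static int compressedSize(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length * 2 + 64];
        int total = 0;
        while (!deflater.finished())
            total += deflater.deflate(buf);
        deflater.end();
        return total;
    }

    public static void main(String[] args) {
        byte[] repetitive = "the quick brown fox ".repeat(500).getBytes(); // highly compressible text
        byte[] random = new byte[repetitive.length];
        new Random(42).nextBytes(random);                                  // incompressible noise

        // Repetitive text shrinks dramatically; random bytes barely shrink, or grow.
        System.out.println("repetitive: " + repetitive.length + " -> " + compressedSize(repetitive));
        System.out.println("random:     " + random.length + " -> " + compressedSize(random));
    }
}
```

The repetitive input compresses to a small fraction of its size, while the random input stays roughly as large as it started, which is the "1/100th to worse than not compressing" range described above.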
Finally, the fastest option is to avoid serialization entirely and use a direct reference into off-heap memory. This gives you access to native memory, like a reference to a struct in C++, for a Java Bean: we generate the code from an interface of getters and setters you provide, or you can hand-craft the code starting from our generated code.
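The idea can be sketched with a hand-rolled "flyweight" over a direct `ByteBuffer`: getters and setters read and write fixed offsets in native memory, so there is no serialization copy at all. This is a stand-alone illustration, not Chronicle's generated code, and the field names (`x`, `y`) are invented for the example:

```java
import java.nio.ByteBuffer;

// A flyweight over off-heap memory: accessors read/write fixed offsets in a
// direct buffer, much like dereferencing a C++ struct. Chronicle generates
// equivalent accessors from an interface you supply; this sketch uses a plain
// direct ByteBuffer and hypothetical fields.
public class OffHeapPoint {
    static final int X_OFFSET = 0, Y_OFFSET = 8, SIZE = 16;
    private final ByteBuffer memory;
    private int base;                        // start of the current entry

    OffHeapPoint(ByteBuffer memory) { this.memory = memory; }

    OffHeapPoint moveTo(int index) {         // point the flyweight at entry #index
        this.base = index * SIZE;
        return this;
    }

    double getX() { return memory.getDouble(base + X_OFFSET); }
    void setX(double x) { memory.putDouble(base + X_OFFSET, x); }
    double getY() { return memory.getDouble(base + Y_OFFSET); }
    void setY(double y) { memory.putDouble(base + Y_OFFSET, y); }

    public static void main(String[] args) {
        ByteBuffer offHeap = ByteBuffer.allocateDirect(1000 * SIZE); // native memory
        OffHeapPoint p = new OffHeapPoint(offHeap);
        p.moveTo(42).setX(1.5);
        p.setY(-2.5);
        // Reads go straight to native memory; no serialization, no copies.
        System.out.println(p.moveTo(42).getX() + ", " + p.getY());
    }
}
```

One flyweight instance can be re-pointed at any entry with `moveTo`, so reading a million entries allocates nothing on the heap.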
As the Map is concurrent, we use multiple threads in our benchmarks. The benchmark was run on a machine with 16 cores and hyper-threading, so we used 32 threads.
The table is a bit old; we now get a little over 1 million updates/sec/thread when all the CPUs are busy, and about double that with just one thread running.
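The shape of such a benchmark can be sketched with the JDK alone. This is not Chronicle's actual harness (it hammers a `ConcurrentHashMap`, and the thread count, key range, and iteration count are arbitrary), but it shows how the aggregate and per-thread Mupd/s figures are computed:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class MapUpdateBench {
    // Run `threads` writers, each performing `perThread` updates on a shared
    // map; return the total number of updates completed.
    static long runBench(int threads, int perThread) {
        ConcurrentHashMap<Integer, Long> map = new ConcurrentHashMap<>();
        LongAdder done = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++)
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    map.merge(i & 0xFFFF, 1L, Long::sum); // bounded key range, contended updates
                    done.increment();
                }
            });
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.sum();
    }

    public static void main(String[] args) {
        int threads = Runtime.getRuntime().availableProcessors();
        int perThread = 1_000_000;
        long start = System.nanoTime();
        long total = runBench(threads, perThread);
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%d threads: %.1f Mupd/s total, %.2f Mupd/s per thread%n",
                threads, total / secs / 1e6, total / secs / 1e6 / threads);
    }
}
```

As the numbers above suggest, per-thread throughput drops as more threads contend, which is why a single thread can manage roughly double the per-thread rate of a fully loaded machine.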
It is worth remembering that ChronicleMap supports persistence, ultra-fast restart, data sizes larger than main memory with a notional on-heap footprint, and LAN/WAN replication. (The WAN replication allows you to control the amount of network bandwidth you utilise.)
If you use Chronicle as more than just a replacement for an on-heap data structure, it can be a more compelling option.
Regards,
Peter.
--
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mechanical-symp...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
If you needed LRU, you would need to create a Map facade which kept an LRU of the keys on heap while the entries were off heap. Maintaining a strict LRU is not simple to combine with concurrent access.
In the future we might support a mostly-LRU policy which is not strict, but we haven't had any clients ask for it.
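The facade described above can be sketched with an access-ordered `LinkedHashMap` holding only the keys on heap, while a backing store (a `ConcurrentHashMap` standing in for the off-heap map) holds the entries. Note the `synchronized` methods: as said above, a strict LRU forces every `get` to mutate the key order, which serializes concurrent access:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Map facade: on-heap LRU of keys, entries in a backing store.
// The coarse lock around the LRU key list is exactly the cost of
// combining a *strict* LRU with concurrent access.
public class LruFacade<K, V> {
    private final int capacity;
    private final Map<K, V> backing = new ConcurrentHashMap<>();   // stand-in for the off-heap map
    private final LinkedHashMap<K, Boolean> lruKeys;               // on-heap keys, access-ordered

    public LruFacade(int capacity) {
        this.capacity = capacity;
        this.lruKeys = new LinkedHashMap<K, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Boolean> eldest) {
                if (size() > LruFacade.this.capacity) {
                    backing.remove(eldest.getKey());               // evict from the backing store too
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized void put(K key, V value) {
        backing.put(key, value);
        lruKeys.put(key, Boolean.TRUE);
    }

    public synchronized V get(K key) {
        lruKeys.get(key);   // touch: moves key to most-recently-used position
        return backing.get(key);
    }

    public static void main(String[] args) {
        LruFacade<String, String> cache = new LruFacade<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");                      // "a" is now most recently used
        cache.put("c", "3");                 // evicts "b", the least recently used
        System.out.println(cache.get("b")); // null
        System.out.println(cache.get("a")); // 1
    }
}
```

A mostly-LRU relaxation, as mentioned above, would let gets skip or batch the touch step and so avoid the global lock.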
Regards,
Peter.
--
You received this message because you are subscribed to the Google Groups "Chronicle" group.
To unsubscribe from this group and stop receiving emails from it, send an email to java-chronicl...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.