What are good configuration settings for large map entries

473 views

Lukas Blunschi

Feb 19, 2013, 1:32:28 PM
to haze...@googlegroups.com
Hej,

I'm running several performance tests using Hazelcast 2.5. One of the tests scales on the map entry size (2KB, 4KB, 8KB, 16KB, 32KB, 64KB, 128KB, ...).

As expected, the network traffic increases. Up to 32KB everything behaves as expected, but starting with entry sizes of 64KB and larger, the operation latencies (Avg Get Lat., Avg Put Lat., Avg Remove Lat.) start to grow exponentially.

Even though the network traffic is already at 50 MB/s, it should actually be able to transport up to ~115 MB/s. CPU and memory should also be fine. Therefore, there must be some other bottleneck...

When I look at the threads in the JVM, most of them are waiting for answers from Hazelcast. E.g. at com.hazelcast.util.ResponseQueueFactory$LockBasedResponseQueue.poll(ResponseQueueFactory.java:63).

Now to my actual question: do you have any suggestions for tuning Hazelcast configuration parameters for larger map entries? E.g., what thread pool sizes would you use, what buffer sizes, or anything else related to this?

Thanks,
Lukas

Enes Akar

Feb 20, 2013, 3:01:14 AM
to haze...@googlegroups.com
How do you serialize your objects?
As objects get more complex, serialization cost can be the bottleneck.
Standard Java serialization in particular is slow.
We recommend using the com.hazelcast.nio.DataSerializable interface for your objects.
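For illustration, a minimal sketch of that pattern. The Customer class and its fields are made-up examples, and the `implements com.hazelcast.nio.DataSerializable` clause is left off so the snippet compiles without the Hazelcast jar; the 2.x interface methods take plain java.io.DataOutput/DataInput, so the method bodies are the same either way:

```java
import java.io.*;

// Hypothetical value class; in real code it would declare
// "implements com.hazelcast.nio.DataSerializable".
class Customer {
    private String name;
    private int orders;

    Customer() {}                         // Hazelcast needs a no-arg constructor
    Customer(String name, int orders) { this.name = name; this.orders = orders; }

    String getName() { return name; }
    int getOrders() { return orders; }

    // Write each field explicitly instead of relying on reflective
    // Java serialization, which is where much of the cost goes.
    void writeData(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(orders);
    }

    void readData(DataInput in) throws IOException {
        name = in.readUTF();
        orders = in.readInt();
    }

    public static void main(String[] args) throws IOException {
        // Round-trip through a byte array, as Hazelcast would on the wire.
        Customer c = new Customer("alice", 3);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        c.writeData(new DataOutputStream(bos));
        Customer copy = new Customer();
        copy.readData(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
        System.out.println(copy.getName() + " " + copy.getOrders());  // prints "alice 3"
    }
}
```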


--
Enes Akar
Hazelcast | Open source in-memory data grid
Mobile: +90.505.394.1668

Lukas Blunschi

Mar 1, 2013, 7:32:37 AM
to haze...@googlegroups.com
Hi and thanks for your reply.

I am using Java serialization, but unfortunately I cannot easily switch to another serialization method, because that would require implementing the new mechanism in many, many classes.

However, switching to com.hazelcast.nio.DataSerializable did help! Even though I'm still using Java serialization inside readData()/writeData() to produce byte arrays, I can now compress this data before storing it in Hazelcast. To save CPU time, I'm using either Snappy or LZ4, both of which perform very well.
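As a rough sketch of this trick (with java.util.zip's Deflater standing in for the Snappy/LZ4 codecs Lukas actually used, and the Hazelcast interface clause again omitted so the snippet compiles standalone; the CompressedBlob name is made up):

```java
import java.io.*;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Keep standard Java serialization for the payload, but compress the
// resulting bytes before they reach Hazelcast. In real code this class
// would implement com.hazelcast.nio.DataSerializable.
class CompressedBlob {
    private Serializable payload;

    CompressedBlob() {}                            // no-arg constructor for deserialization
    CompressedBlob(Serializable payload) { this.payload = payload; }

    Serializable getPayload() { return payload; }

    void writeData(DataOutput out) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new DeflaterOutputStream(bos))) {
            oos.writeObject(payload);              // plain Java serialization...
        }                                          // ...but the stored bytes are compressed
        byte[] bytes = bos.toByteArray();
        out.writeInt(bytes.length);
        out.write(bytes);
    }

    void readData(DataInput in) throws IOException {
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        try (ObjectInputStream ois = new ObjectInputStream(
                 new InflaterInputStream(new ByteArrayInputStream(bytes)))) {
            payload = (Serializable) ois.readObject();
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
    }
}
```

With Snappy or LZ4 the stream wrappers change, but the shape of writeData()/readData() stays the same.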

The degradation I was seeing in my tests was therefore, contrary to my expectations, mainly due to network bandwidth. (For the curious: I had jumbo frames enabled, and this actually *lowered* network performance...)

Thanks and best,
Lukas

Tim Peierls

Mar 1, 2013, 8:04:10 AM
to haze...@googlegroups.com
On Fri, Mar 1, 2013 at 7:32 AM, Lukas Blunschi <lukas.b...@appway.com> wrote:
I am using Java serialization, but unfortunately I cannot easily switch to another serialization method, because that would require implementing the new mechanism in many, many classes.

I had a similar issue, which I solved by using a SerializableHolder<T> class. The holder implements DataSerializable, and it knows how to use my non-Java serialization machinery on the contained object graph.

This approach only works if you're either willing to put up with types like IMap<MyKey, SerializableHolder<MyValue>> or able to roll a translation layer (perhaps using ForwardingIMap, slightly out-of-date version here).
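Tim's pasted snippet did not survive the archive; a hedged reconstruction of what such a holder might look like follows, with Function-based codecs standing in for his unspecified serialization machinery and the Hazelcast interface clause again omitted so it compiles standalone. Note that Hazelcast needs a no-arg constructor to deserialize, so a production holder would look its codec up statically rather than take it in the constructor:

```java
import java.io.*;
import java.util.function.Function;

// Generic wrapper: the holder, not the contained type, knows how to
// serialize. In real code it would implement com.hazelcast.nio.DataSerializable.
class SerializableHolder<T> {
    // Hypothetical pluggable codec; Hazelcast's no-arg-constructor
    // requirement means a real version would resolve this statically.
    private final Function<T, byte[]> encoder;
    private final Function<byte[], T> decoder;
    private T value;

    SerializableHolder(Function<T, byte[]> encoder, Function<byte[], T> decoder) {
        this.encoder = encoder;
        this.decoder = decoder;
    }

    SerializableHolder(Function<T, byte[]> encoder, Function<byte[], T> decoder, T value) {
        this(encoder, decoder);
        this.value = value;
    }

    T get() { return value; }

    void writeData(DataOutput out) throws IOException {
        byte[] bytes = encoder.apply(value);       // delegate to the custom codec
        out.writeInt(bytes.length);
        out.write(bytes);
    }

    void readData(DataInput in) throws IOException {
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        value = decoder.apply(bytes);
    }
}
```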

--tim
