How to catch exceptions in the C++ client, rather than having them logged as INFO/WARNING


Abhishek Yadav

Apr 19, 2021, 6:07:46 AM
to Hazelcast
Hi,

I am using C++ client 4.0.0 with IMDG Enterprise server 4.1.1.
My program uses the client API put_all() to write into a cluster of 3 nodes.
I can see INFO/WARNING messages logged for lifecycle events, heartbeat timeout failures, etc. However, I want to catch these as exceptions in my program, but I can't find a way to do that. Can you please share an example?

Thanks!
Abhishek

Sharath Sahadevan

Apr 22, 2021, 11:29:04 AM
to Hazelcast
Hi Abhishek,

One option is to configure listeners for the available events in your client. You can find more information here and samples here.
Hope that helps.

İhsan Demir

Apr 22, 2021, 12:36:44 PM
to haze...@googlegroups.com
Hello,

`my_map->put_all(entries).get()` will throw an exception if the call fails due to a heartbeat failure. But the connection may also have recovered; if you did not get an exception, it recovered. When a heartbeat fails, the connection is closed and a new connection to the member is opened.

You can add a lifecycle_listener (see https://github.com/hazelcast/hazelcast-cpp-client/blob/master/Reference_Manual.md#7513-listening-for-lifecycle-events) to follow the connected state of the client to the cluster.

Does this help solve your problem? Maybe you can share the client logs so we can better understand what you want to do and what the problem is. Why do you want to catch heartbeat failures? They are somewhat internal. But as I mentioned, the `my_map->put_all(entries).get()` call will throw if any exception occurred.

Best Regards,

This message contains confidential information and is intended only for the individuals named. If you are not the named addressee you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of e-mail transmission. If verification is required, please request a hard-copy version. -Hazelcast


Abhishek Yadav

May 1, 2021, 1:16:09 PM
to Hazelcast
Hi,
Sorry for the late reply. My heartbeat failure turned out to be caused by using a pipeline in a multithreaded environment. I have now added a mutex so that only one pipeline writes into a given map at a time.

Since then I am facing another challenge and would like a suggestion.

Hazelcast server: Java 4.1.1 Enterprise IMDG
Hazelcast client: C++ 4.0.0
Cluster: 3 nodes, each with the configuration below
CPU: 16
RAM: 64 GB
Min heap: 32 GB
Max heap: 32 GB

I am writing ~200 million records into a single map with my client program, which runs for about 3 hours. But I am getting "GC OOM" after writing 100M+, after which the nodes crash.

I want to understand: isn't the load being divided equally among all 3 nodes? What setting controls that?

What should my hardware configuration be to avoid such a crash for this much data?

Since we are implementing this inside a bank's network, we can't copy-paste logs, configuration, etc.

Thanks!
Abhishek

ihsan demir

May 4, 2021, 7:58:00 AM
to Hazelcast
Hello,

The pipeline itself is not thread safe; when you call pipelining::add, for example, you need to protect it. That said, all other Hazelcast calls, such as imap::put, are thread safe. You can have one pipeline hold `future`s for multiple maps, which works as long as you add them to the pipeline in a thread-safe way. I don't know your logic, but if you have a different thread per map, then yes, your approach may be right for you.

As for the server-side GC OOM problem: yes, by default the Hazelcast client distributes the data across the 3 servers based on the entry's key. As long as the keys differ, they will probably go to different servers (we call this partitioning; the partition is calculated from a hash of the key's binary data). I suggest you check the memory size of each server during the run and confirm that they all grow as you keep adding data. You need to configure the server JVM parameters to allow more heap usage — did you do that? What are your server start options?

Guido Medina

May 4, 2021, 8:20:00 AM
to haze...@googlegroups.com
Also, what serialization are you using? It would be worth using something like Kryo, for example, to make your objects as compact as possible. You should probably test a few different serializations, see what size one of your objects comes out to, and pick the one that fits best.

ihsan demir

May 4, 2021, 10:26:01 AM
to Hazelcast
What is the size of each entry? Let's calculate the needed memory for your 100M+ entries.

Abhishek Yadav

May 4, 2021, 1:13:32 PM
to Hazelcast
I am writing the following into an IMap:

Key:
int64_t var1; 8 bytes
int64_t var2; 8 bytes
int32_t var3; 4 bytes
char var4; 1 byte

Value:
double var5; 8 bytes
char var6; 1 byte
int32_t var7; 4 bytes

Total: 34 bytes x 200M = 6,800,000,000 bytes (~6.8 GB)

I have not done anything specific for compression; I will try Kryo (any C++ example would help). I will check the server parameters and reply.
Thanks!

İhsan Demir

May 4, 2021, 10:36:13 PM
to haze...@googlegroups.com
What serialization did you use for the entries? There may be header overhead depending on which serialization you chose. Also, don't forget that the default backup count on the server is 1, which means you have to multiply the total by 2 (one copy for the partitions a member owns, plus the backups of partitions hosted on another member).

Did you also monitor the number of entries and memory usage in the cluster using Management Center (https://hazelcast.com/product-features/management-center/ and https://docs.hazelcast.com/management-center/4.2021.04/)? It usually provides very helpful insight into what is going on.

Abhishek Yadav

May 7, 2021, 10:01:14 AM
to Hazelcast
Yes, I am using Management Center for monitoring.

Right now I am struggling with registering my serialization class programmatically on the server side.

I am getting "No DataSerializerFactory registered for namespace: X. Error code: 21.." when doing a predicate search.

Can you expand your example to put both key and value as user-defined structures and do a predicate search after putting them into the map?

ihsan demir

May 10, 2021, 5:58:42 AM
to Hazelcast
On the server side, you need a corresponding Java class on the server's classpath that implements IdentifiedDataSerializable, as in this example (don't forget the factory; you can register it in the server-side XML configuration). Package the class into a jar and add it to your server classpath.

Here is an example query (see identified-data-serializable example):
```
auto result = map->entry_set<std::string, Person>(
    hazelcast::client::query::greater_less_predicate(hz, "age", 40, true, true)).get();
```

Does this help?

Abhishek Yadav

May 11, 2021, 2:04:18 AM
to Hazelcast
Yes, it surely does. I am trying this. Thanks!

Abhishek Yadav

May 20, 2021, 8:56:36 PM
to Hazelcast
Hi,
Thank you very much! After your suggestion I am able to do predicate search with identified_data_serializer.

NOTE: For matching on the key I had to use __key.member rather than __key#member; otherwise it matched on the value in the IMap. This was not mentioned in any example!

1. I want to do a similar predicate search with global_serializer, but that is not working. Can you please provide an example where both key and value are user-defined structures?

2. hzmap.key_set<K, V>() fails when the map has 54M records.
  A. I believe that if I limit it with a paging predicate it will not fail? Please point me to an example.
  B. I hope the paging predicate doesn't deserialize on the server side?

Thanks!