How to set up Hazelcast clients so they don't stop working if the whole cluster is down?

Ulf Gitschthaler

Jun 23, 2014, 5:27:03 AM
to haze...@googlegroups.com
Hi, 

I set up Hazelcast as a second-level cache for our Hibernate entities. The basic setup works fine, but I am currently struggling with a full application freeze when the cache server cluster goes down. The behavior I'd expect would be the following: 
  1. All Hazelcast servers die (hopefully very rare) 
  2. Clients ask the near cache for the entity to load; not found
  3. Clients ask the remote cache for the entity to load, but the server cluster is down
  4. Maybe some retries with some delay 
  5. Clients give control back to Hibernate and ask it to fetch the entity from the DB instead (roughly the cache-aside flow sketched below) 
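
Expressed as code, that fallback would look roughly like the cache-aside sketch below. It is purely illustrative: with a Hibernate second-level cache the lookup happens inside Hibernate rather than in application code, and the names here (CacheAsideLoader, dbLoader) are hypothetical.

import java.util.function.Function;

import com.hazelcast.core.HazelcastInstanceNotActiveException;
import com.hazelcast.core.IMap;

public class CacheAsideLoader<K, V> {

    private final IMap<K, V> cache;        // the client-side near cache sits in front of this map
    private final Function<K, V> dbLoader; // hypothetical fallback, e.g. a DAO or Hibernate lookup

    public CacheAsideLoader(IMap<K, V> cache, Function<K, V> dbLoader) {
        this.cache = cache;
        this.dbLoader = dbLoader;
    }

    public V load(K key) {
        try {
            V cached = cache.get(key);     // steps 2-4: near cache first, then the cluster (client retries apply)
            if (cached != null) {
                return cached;
            }
        } catch (HazelcastInstanceNotActiveException e) {
            // step 5: the cluster is unreachable, so skip the cache entirely
        }
        return dbLoader.apply(key);        // fetch from the database instead
    }
}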

Instead, I get the following exception immediately after the servers die: 

com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
    at com.hazelcast.client.HazelcastClientProxy.getClient(HazelcastClientProxy.java:245)
    at com.hazelcast.client.HazelcastClientProxy.getCluster(HazelcastClientProxy.java:120)
    at com.hazelcast.hibernate.HazelcastTimestamper.nextTimestamp(HazelcastTimestamper.java:29)
    at com.hazelcast.hibernate.AbstractHazelcastCacheRegionFactory.nextTimestamp(AbstractHazelcastCacheRegionFactory.java:65)
    at org.hibernate.impl.SessionFactoryImpl.openSession(SessionFactoryImpl.java:526)
    at org.hibernate.impl.SessionFactoryImpl.openSession(SessionFactoryImpl.java:535)

My configuration for the client looks like this: 

<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    
    <network>
        <cluster-members>
            <address>127.0.0.1:5701</address>
            <address>127.0.0.1:5702</address>
        </cluster-members>
        <smart-routing>true</smart-routing>
        <redo-operation>false</redo-operation>
        <connection-timeout>100</connection-timeout>
        <connection-attempt-period>100</connection-attempt-period>
        <connection-attempt-limit>2</connection-attempt-limit>
    </network>

    <near-cache name="default">
        <max-size>5000</max-size>
        <max-idle-seconds>0</max-idle-seconds>
        <time-to-live-seconds>0</time-to-live-seconds>       
    </near-cache>
</hazelcast-client>
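
For anyone wiring the client up in code instead of XML, the same settings would look roughly like this with the Hazelcast 3.x client API (setter names taken from that API generation, so double-check them against your version):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientNetworkConfig;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.HazelcastInstance;

public class ClientSetup {

    public static HazelcastInstance newClient() {
        ClientConfig config = new ClientConfig();

        ClientNetworkConfig network = config.getNetworkConfig();
        network.addAddress("127.0.0.1:5701", "127.0.0.1:5702");
        network.setSmartRouting(true);
        network.setRedoOperation(false);
        network.setConnectionTimeout(100);       // ms per connection attempt
        network.setConnectionAttemptPeriod(100); // ms to wait between attempts
        network.setConnectionAttemptLimit(2);    // give up (and throw) after two rounds

        NearCacheConfig nearCache = new NearCacheConfig();
        nearCache.setName("default");
        nearCache.setMaxSize(5000);
        nearCache.setMaxIdleSeconds(0);
        nearCache.setTimeToLiveSeconds(0);
        config.addNearCacheConfig("default", nearCache);

        return HazelcastClient.newHazelcastClient(config);
    }
}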


I wonder how we can set up the clients to continue working even if the remote Hazelcast cluster is down?

Thx, 
Ulf

Jeffrey Hsie

Apr 12, 2015, 2:12:29 PM
to haze...@googlegroups.com
Hi.

Is there any new info on this? We are planning to run a remote cache cluster that our app servers will connect to via the Hazelcast client. In anticipation of a network or cache cluster failure, it would be ideal if data operations could fall back to the database when the cache fails.

shara...@gmail.com

Dec 28, 2016, 11:14:31 AM
to Hazelcast
Hi Ulf,

Did you find any solution for the above use case? We also have a similar design and came across the same situation.


Regards,
Sharad Keer

em...@hazelcast.com

Dec 29, 2016, 8:37:00 AM
to Hazelcast
Hi Sharad,

Currently there's no fallback mechanism for Hazelcast's Hibernate second-level cache. If the cluster is not accessible, the client starts to throw exceptions as it cannot fulfill its job. You can create an issue at https://github.com/hazelcast/hazelcast-hibernate/issues
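
A possible stop-gap until such a fallback exists (not an official feature) is to subclass the region factory and guard the call that fails in Ulf's stack trace. A minimal sketch, assuming nextTimestamp() is overridable in your hazelcast-hibernate version; it only covers this one code path, and region get/put operations will still throw while the cluster is down:

import com.hazelcast.core.HazelcastInstanceNotActiveException;
import com.hazelcast.hibernate.HazelcastCacheRegionFactory;

public class FallbackHazelcastCacheRegionFactory extends HazelcastCacheRegionFactory {

    @Override
    public long nextTimestamp() {
        try {
            return super.nextTimestamp();
        } catch (HazelcastInstanceNotActiveException e) {
            // Cluster unreachable: fall back to the local clock so openSession() keeps working.
            return System.currentTimeMillis();
        }
    }
}

Hibernate would then be pointed at this subclass via hibernate.cache.region.factory_class instead of the stock factory class.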

David Zheng

Oct 22, 2021, 2:32:24 PM
to Hazelcast
This feels like a common / desirable pattern to have.
Is there any way to skip the cache if it is not available? An app shouldn't stop completely just because the cache is unavailable when it is still able to process the request. 

David Brimley

Oct 25, 2021, 3:57:28 AM
to Hazelcast
With the right client connection settings, you can ensure the near cache returns the values stored in it, even if the entire cluster goes down.
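
A rough sketch of the kind of client settings that make this possible on recent (4.x/5.x) clients: asynchronous reconnect, an effectively unlimited cluster-connect timeout, and a near cache on the map. The method names are from that API generation, and whether reads keep succeeding while disconnected depends on your client version, so treat this as a sketch rather than a recipe:

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientConnectionStrategyConfig;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.HazelcastInstance;

public class ResilientClientSetup {

    public static HazelcastInstance newClient() {
        ClientConfig config = new ClientConfig();

        // Keep a near cache for the map so reads can be served from local memory.
        config.addNearCacheConfig(new NearCacheConfig("default"));

        // Reconnect in the background instead of failing fast, and keep retrying
        // the cluster connection (effectively) forever.
        ClientConnectionStrategyConfig strategy = config.getConnectionStrategyConfig();
        strategy.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
        strategy.getConnectionRetryConfig().setClusterConnectTimeoutMillis(Long.MAX_VALUE);

        return HazelcastClient.newHazelcastClient(config);
    }
}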

Take a read of this blog post...


David.
