Hazelcast Client/Server Model - Near Cache Not Working


Balamurugan Selvam

Sep 23, 2020, 3:27:04 AM
to Hazelcast

Dear Team,

I am using Hazelcast as a simple database cache.

I have one Spring Boot application dedicated to acting as a Hazelcast server.

APPLICATION 1 - Hazelcast Server
Configuration -

@Configuration
public class HazlecastConfiguration {

    @Bean
    public HazelcastInstance hazlecastInstance() {

        EvictionConfig evictionConfig = new EvictionConfig()
                .setEvictionPolicy(EvictionPolicy.NONE)
                .setMaximumSizePolicy(MaxSizePolicy.ENTRY_COUNT)
                .setSize(5000);

        NearCacheConfig nearCacheConfig = new NearCacheConfig()
                .setInMemoryFormat(InMemoryFormat.OBJECT)
                .setInvalidateOnChange(true)
                .setTimeToLiveSeconds(600)
                .setEvictionConfig(evictionConfig);

        Config config = new Config();
        config.getMapConfig("xref")
                .setInMemoryFormat(InMemoryFormat.BINARY)
                .setNearCacheConfig(nearCacheConfig);

        NetworkConfig network = config.getNetworkConfig();
        network.setPortAutoIncrement(true);
        network.setPort(14571);
        network.setPublicAddress("IPADDRESS" + ":14571");
        config.setNetworkConfig(network);
        config.getManagementCenterConfig().setEnabled(true);

        JoinConfig join = network.getJoin();
        join.getMulticastConfig().setEnabled(false);
        join.getTcpIpConfig().setEnabled(true);

        return Hazelcast.newHazelcastInstance(config);
    }
}



The application above acts as the server: it loads data from the database and puts the values into an IMap.


IMap<String, CrossRef> xrefMap = hazelcastInstance.getMap("xref");
int[] idx = { 0 };
xrefRepository.findAllCrossRefForOrderRelease().forEach(xrefelement -> {
    xrefMap.put(String.valueOf(idx[0]++), xrefelement);
});
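
For reference, a minimal sketch of the same loading step batched with IMap.putAll, assuming the repository and map names used above; putAll cuts the number of remote calls compared to one put per entry:

Map<String, CrossRef> batch = new HashMap<>();
int[] counter = { 0 };
xrefRepository.findAllCrossRefForOrderRelease()
        .forEach(xrefelement -> batch.put(String.valueOf(counter[0]++), xrefelement));
// One bulk operation instead of one put per entry.
xrefMap.putAll(batch);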






APPLICATION 2 - Hazelcast Client

Client Application - We have another application whose business logic transforms one Java model into another, using some of the cached values when appending data.


Configuration -

@Bean
public HazelcastInstance hazelcastInstance() {

    HazelcastInstance member = Hazelcast.newHazelcastInstance();

    ClientConfig config = new ClientConfig();
    config.getConnectionStrategyConfig()
            .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
    config.getConnectionStrategyConfig().getConnectionRetryConfig();

    ClientNetworkConfig networkConfig = config.getNetworkConfig();
    addressList = new ArrayList<>();
    addressList.add("10.140.127.248:14571");
    networkConfig.setAddresses(addressList);
    networkConfig.addAddress(addressList.toArray(new String[addressList.size()]));
    config.setNetworkConfig(networkConfig);
    // .setClusterConnectTimeoutMillis(Integer.MAX_VALUE);

    NearCacheConfig nearCacheConfig = null;
    if (nearCacheConfig == null) {
        nearCacheConfig = new NearCacheConfig("xref")
                .setInMemoryFormat(InMemoryFormat.OBJECT)
                .setInvalidateOnChange(false)
                .setCacheLocalEntries(true);
    }

    Map<String, NearCacheConfig> nearCacheConfigMap = new HashMap<String, NearCacheConfig>();
    nearCacheConfigMap.put("xrefLocal", nearCacheConfig);
    config.setNearCacheConfigMap(nearCacheConfigMap);

    ClientConnectionStrategyConfig connectionStrategyConfig = config.getConnectionStrategyConfig();
    connectionStrategyConfig.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
    ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig();
    connectionRetryConfig.setInitialBackoffMillis(Integer.MAX_VALUE)
            .setMaxBackoffMillis(Integer.MAX_VALUE)
            .setMultiplier(1)
            .setJitter(0.2);
    connectionRetryConfig.setFailOnMaxBackoff(true);
    connectionRetryConfig.setEnabled(true);
    config.setConnectionStrategyConfig(connectionStrategyConfig);

    HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
    return client;
}





PROBLEM -- Data from the server is retrieved in the client application and everything works fine. But when the server application is down, the near cache is not working, so the client application can't perform its transformation process and the whole process shuts down.


Kindly help in solving this near cache issue in the client configuration.

Thanks,
Balamurugan Selvam

M. Sancar Koyunlu

Sep 23, 2020, 4:23:20 AM
to Hazelcast
Hi Balamurugan,
I have investigated your client configuration.
For non-stop near cache behavior, you should keep your client open at all times. The blog post is for the 4.x series, and some of the configuration has changed between 3.12.x and 4.x.

In 3.12.x, you need to set the following to false to keep your client always up:
`connectionRetryConfig.setFailOnMaxBackoff(false);`
You also need to set `MaxBackoffMillis` to a low value so that when your members are up again, your client can connect back. With your config (MAX_VALUE), the client will not reconnect when the members come back up.
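
In code, those two 3.12.x settings look roughly like this (a minimal sketch; it reuses the `clientConfig` and `connectionRetryConfig` names from the complete example further below, and the 10000 ms cap is only illustrative):

ConnectionRetryConfig connectionRetryConfig = clientConfig.getConnectionStrategyConfig().getConnectionRetryConfig();
connectionRetryConfig.setEnabled(true);
// Keep retrying instead of shutting the client down once the backoff cap is reached.
connectionRetryConfig.setFailOnMaxBackoff(false);
// A low cap so the client reconnects quickly when the members are back.
connectionRetryConfig.setMaxBackoffMillis(10000);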

`setCacheLocalEntries(true);` is a member-side configuration. If you set it on a 3.12.8 client, your client will not start and will throw:
java.lang.IllegalArgumentException: The Near Cache option `cache-local-entries` is not supported in client configurations.
So remove `setCacheLocalEntries` from the client near cache config.

The member-side near cache config does not affect the client-side near cache, so I am skipping that part. Here is a complete client setup for your use case.


HazelcastInstance instance = Hazelcast.newHazelcastInstance();

ClientConfig clientConfig = new ClientConfig();
NearCacheConfig clientNearCacheConfig = new NearCacheConfig("xref")
        .setInMemoryFormat(InMemoryFormat.OBJECT)
        .setInvalidateOnChange(false);

clientConfig.addNearCacheConfig(clientNearCacheConfig);

ClientConnectionStrategyConfig connectionStrategyConfig = clientConfig.getConnectionStrategyConfig();
connectionStrategyConfig.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig();
connectionRetryConfig.setInitialBackoffMillis(1000)
        .setMaxBackoffMillis(10000)
        .setMultiplier(1)
        .setJitter(0.2);
connectionRetryConfig.setFailOnMaxBackoff(false);
connectionRetryConfig.setEnabled(true);
clientConfig.setConnectionStrategyConfig(connectionStrategyConfig);

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

IMap<Object, Object> map = client.getMap("xref");

for (int i = 0; i < 1000; i++) {
    map.put(i, i);
}

for (int i = 0; i < 1000; i++) {
    // populates the near cache
    map.get(i);
}

instance.shutdown();

Random random = new Random();
while (true) {
    Thread.sleep(5000);
    System.out.println("get a cached key");
    System.out.println(map.get(random.nextInt(1000)));

    try {
        System.out.println("Try to get a non cached key, should result with exception without blocking the thread ");
        map.get(10001);
    } catch (HazelcastClientOfflineException e) {
        System.out.println("Get exception " + e);
    }
}

> Data from the server is retrieved in the client application and everything works fine. But when the server application is down, the near cache is not working, so the client application can't perform its transformation process and the whole process shuts down.
I hope this helps. If your client is closing, please share the reason. Are you closing the client yourself, or is it closing unexpectedly? If it is closing unexpectedly, the Hazelcast client logs should tell us why.
And what exactly do you mean by `Near cache is not working`?
Are you getting an exception from `map.get` calls, or is it blocking your thread?



Balamurugan Selvam

Sep 23, 2020, 4:42:34 AM
to haze...@googlegroups.com
Hi Sancar,

Many thanks for your reply.

When I shut down the member instance manually, the client application throws the exception below, and the client application can't perform its business logic either.




2020-09-23 14:00:50.154  INFO 3744 --- [ient_1.cluster-] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is CLIENT_CONNECTED
2020-09-23 14:00:50.154  INFO 3744 --- [           main] c.h.internal.diagnostics.Diagnostics     : hz.client_1 [dev] [3.12.8] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-09-23 14:00:50.332  INFO 3744 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2020-09-23 14:00:50.564  INFO 3744 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8089 (http) with context path ''
2020-09-23 14:00:50.573  INFO 3744 --- [           main] .r.r.RfilHazelcasrCacheClientApplication : Started RfilHazelcasrCacheClientApplication in 11.169 seconds (JVM running for 11.95)
2020-09-23 14:00:59.853  INFO 3744 --- [nio-8089-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-09-23 14:00:59.853  INFO 3744 --- [nio-8089-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2020-09-23 14:00:59.860  INFO 3744 --- [nio-8089-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 6 ms
2020-09-23 14:01:21.096  INFO 3744 --- [.IO.thread-in-0] c.h.c.connection.nio.ClientConnection    : hz.client_1 [dev] [3.12.8] ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.140.127.248:64496->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 14:01:21.095, lastWriteTime=2020-09-23 14:01:20.380, closedTime=2020-09-23 14:01:21.095, connected server version=3.12.8} closed. Reason: Connection closed by the other side
2020-09-23 14:01:21.098  INFO 3744 --- [.IO.thread-in-0] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Removed connection to endpoint: [10.140.127.248]:14571, connection: ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.140.127.248:64496->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 14:01:21.095, lastWriteTime=2020-09-23 14:01:20.380, closedTime=2020-09-23 14:01:21.095, connected server version=3.12.8}
2020-09-23 14:01:21.102  INFO 3744 --- [ient_1.cluster-] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is CLIENT_DISCONNECTED
2020-09-23 14:01:21.102  INFO 3744 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to cluster with name: dev
2020-09-23 14:01:21.102  INFO 3744 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 14:01:22.111  WARN 3744 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 14:01:22.112  WARN 3744 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 828 ms, attempt 1, retry timeout millis 10000 cap
2020-09-23 14:01:22.942  INFO 3744 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 14:01:23.009 ERROR 3744 --- [nio-8089-exec-5] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.hazelcast.client.HazelcastClientOfflineException: Client is offline.] with root cause

com.hazelcast.client.HazelcastClientOfflineException: Client is offline.
at com.hazelcast.client.connection.nio.DefaultClientConnectionStrategy.beforeGetConnection(DefaultClientConnectionStrategy.java:66) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl.checkAllowed(ClientConnectionManagerImpl.java:300) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl.getConnection(ClientConnectionManagerImpl.java:272) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl.getOrTriggerConnect(ClientConnectionManagerImpl.java:263) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.SmartClientInvocationService.getOrTriggerConnect(SmartClientInvocationService.java:73) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.SmartClientInvocationService.invokeOnRandomTarget(SmartClientInvocationService.java:58) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.ClientInvocation.invokeOnSelection(ClientInvocation.java:167) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.ClientInvocation.invoke(ClientInvocation.java:146) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:251) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.proxy.ClientMapProxy.values(ClientMapProxy.java:1254) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.radial.rfil.XrefUIController.getAllXrefs(XrefUIController.java:37) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:564) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:105) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:878) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:792) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:626) ~[tomcat-embed-core-9.0.37.jar:4.0.FR]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:733) ~[tomcat-embed-core-9.0.37.jar:4.0.FR]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:373) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1589) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[na:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
at ------ submitted from ------.(Unknown Source) ~[na:na]
at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:96) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:33) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:155) ~[hazelcast-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:252) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.proxy.ClientMapProxy.values(ClientMapProxy.java:1254) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.radial.rfil.XrefUIController.getAllXrefs(XrefUIController.java:37) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:564) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:105) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:878) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:792) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:626) ~[tomcat-embed-core-9.0.37.jar:4.0.FR]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:733) ~[tomcat-embed-core-9.0.37.jar:4.0.FR]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.8.RELEASE.jar:5.2.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:373) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1589) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[na:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.37.jar:9.0.37]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]



Is it possible to keep the cached values in the client application when the member is down, so that the business is not affected, and to have the client reconnect to the member automatically once it is up again?

Please help on this request.

Thanks,
Balamurugan Selvam



M. Sancar Koyunlu

Sep 23, 2020, 4:58:10 AM
to Hazelcast
HazelcastClientOfflineException does not mean the client is shut down. In the blog post and in my example, I am catching this exception on purpose.
This is the exception that you should expect, and handle, while the members are down.
The exception is thrown when the key is not in the near cache. If it is in the near cache, the call simply returns the value without an exception.

HazelcastClientOfflineException is thrown while the client is in the DISCONNECTED state and needs to access the cluster (when the reconnect mode is ASYNC).
The client will reconnect to the cluster when it is back up again.

As a side note, when the client shuts down, it throws HazelcastClientNotActiveException and cannot recover from that state. With the config I have suggested, you will never get into that state.
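
A minimal sketch of how calling code could act on that distinction (not from the thread; the helper and its names are illustrative, the exception classes are the 3.12.x ones shown in the stack trace above):

import com.hazelcast.client.HazelcastClientNotActiveException;
import com.hazelcast.client.HazelcastClientOfflineException;
import com.hazelcast.core.IMap;
import java.util.Optional;

// Hypothetical helper: serve near-cached keys while the client is DISCONNECTED,
// and surface only the unrecoverable shutdown case.
public final class NearCacheReads {

    private NearCacheReads() {
    }

    public static <K, V> Optional<V> tryGet(IMap<K, V> map, K key) {
        try {
            // Served locally from the near cache if the key is cached,
            // otherwise this needs a cluster round trip.
            return Optional.ofNullable(map.get(key));
        } catch (HazelcastClientOfflineException e) {
            // DISCONNECTED with ASYNC reconnect and the key was not in the near cache.
            return Optional.empty();
        } catch (HazelcastClientNotActiveException e) {
            // The client itself was shut down; it cannot recover from this state.
            throw e;
        }
    }
}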





Balamurugan Selvam

Sep 23, 2020, 5:25:11 AM
to haze...@googlegroups.com
Many thanks, Sancar, for your kind guidance.

Can you please help me understand why my near cache is not working even though I have configured it?

As you advised, when the value is in the near cache I should not see the above exception; that's the exact scenario I need in my application.
Kindly guide me on that.


Client configuration after your advice -



@Bean
public HazelcastInstance hazelcastInstance() {

    HazelcastInstance instance = Hazelcast.newHazelcastInstance();

    ClientConfig clientConfig = new ClientConfig();
    NearCacheConfig clientNearCacheConfig = new NearCacheConfig("xref")
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setInvalidateOnChange(false);

    clientConfig.addNearCacheConfig(clientNearCacheConfig);
    ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig();

    addressList = new ArrayList<>();
    addressList.add("10.140.127.248:14571");
    networkConfig.setAddresses(addressList);
    networkConfig.addAddress(addressList.toArray(new String[addressList.size()]));
    clientConfig.setNetworkConfig(networkConfig);

    ClientConnectionStrategyConfig connectionStrategyConfig = clientConfig.getConnectionStrategyConfig();
    connectionStrategyConfig.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
    ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig();
    connectionRetryConfig.setInitialBackoffMillis(1000)
            .setMaxBackoffMillis(10000)
            .setMultiplier(1)
            .setJitter(0.2);
    connectionRetryConfig.setFailOnMaxBackoff(false);
    connectionRetryConfig.setEnabled(true);
    clientConfig.setConnectionStrategyConfig(connectionStrategyConfig);

    HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
    return client;
}



My controller, which reads the map values -

@GetMapping("/xref/references")
public String getAllXrefs(Model model) {
    IMap<String, CrossRef> xref = hazelcastInstance.getMap("xref");
    model.addAttribute("xrefs", new ArrayList<CrossRef>(xref.values()));
    return "xref/list";
}




I need these values to be available even when the members are down. Kindly help with this!






Thanks,
Balamurugan Selvam

M. Sancar Koyunlu

Sep 23, 2020, 6:03:09 AM
to Hazelcast
The client populates the near cache on `map.get()` calls. If you have not made any requests yet, there will be nothing in the near cache.
If you want your near cache fully populated, you can call map.get for all of your keys at the beginning, while the members are up.

IMap.values() does not work with the near cache. Since `values` by contract must return all the values, it has to ask the remote cluster, and when it attempts the remote call it will throw `HazelcastClientOfflineException`.

Alternatively, you can use getAll with keys that you are sure are in the near cache. Here is the modified example:

```
HazelcastInstance instance = Hazelcast.newHazelcastInstance();

ClientConfig clientConfig = new ClientConfig();
NearCacheConfig clientNearCacheConfig = new NearCacheConfig("xref")
        .setInMemoryFormat(InMemoryFormat.OBJECT)
        .setInvalidateOnChange(false);

clientConfig.addNearCacheConfig(clientNearCacheConfig);

ClientConnectionStrategyConfig connectionStrategyConfig = clientConfig.getConnectionStrategyConfig();
connectionStrategyConfig.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig();
connectionRetryConfig.setInitialBackoffMillis(1000)
        .setMaxBackoffMillis(10000)
        .setMultiplier(1)
        .setJitter(0.2);
connectionRetryConfig.setFailOnMaxBackoff(false);
connectionRetryConfig.setEnabled(true);
clientConfig.setConnectionStrategyConfig(connectionStrategyConfig);

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

IMap<Object, Object> map = client.getMap("xref");

for (int i = 0; i < 1000; i++) {
    map.put(i, i);
}

for (int i = 0; i < 1000; i++) {
    // populates the near cache
    map.get(i);
}

instance.shutdown();

HashSet<Object> objects = new HashSet<>();
for (int i = 0; i < 1000; i++) {
    objects.add(i);
}

try {
    System.out.println("Successful call. Size should be 1000 : " + map.getAll(objects).size());
} catch (HazelcastClientOfflineException e) {
    // We do not expect HazelcastClientOfflineException in this example
}
```

Balamurugan Selvam

Sep 23, 2020, 7:09:17 AM
to haze...@googlegroups.com
Hi Sancar,

Still no luck. I used the same setup below, and it works on startup. But when I use the @Autowired annotation in the controller, I can't read from the near cache when the member is down.


Client Configuration -


@Bean
public HazelcastInstance hazelcastInstance() {

    ClientConfig clientConfig = new ClientConfig();
    NearCacheConfig clientNearCacheConfig = new NearCacheConfig("xrefLocal")
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setInvalidateOnChange(false);
    clientConfig.addNearCacheConfig(clientNearCacheConfig);

    ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig();
    addressList = new ArrayList<>();
    addressList.add("10.140.127.248:14571");
    networkConfig.setAddresses(addressList);
    networkConfig.addAddress(addressList.toArray(new String[addressList.size()]));
    clientConfig.setNetworkConfig(networkConfig);

    ClientConnectionStrategyConfig connectionStrategyConfig = clientConfig.getConnectionStrategyConfig();
    connectionStrategyConfig.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
    ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig();
    connectionRetryConfig.setInitialBackoffMillis(1000)
            .setMaxBackoffMillis(10000)
            .setMultiplier(1)
            .setJitter(0.2);
    connectionRetryConfig.setFailOnMaxBackoff(false);
    connectionRetryConfig.setEnabled(true);
    clientConfig.setConnectionStrategyConfig(connectionStrategyConfig);

    HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

    IMap<String, CrossRef> map = client.getMap("xref");
    for (int i = 0; i < map.size(); i++) {
        map.put(Integer.toString(i), map.get(Integer.toString(i)));
    }

    for (int i = 0; i < map.size(); i++) {
        map.get(Integer.toString(i));
    }

    HashSet<CrossRef> objects = new HashSet<>();
    for (int i = 0; i < map.size(); i++) {
        objects.add(map.get(Integer.toString(i)));
    }
    try {
        System.out.println("Successful call. Size should be : " + objects.size() + " --> " + map.size());
    } catch (HazelcastClientOfflineException e) {
        // We do not expect HazelcastClientOfflineException in this example
    }

    return client;
}








On startup, all works fine.


My Controller -

@Autowired
private HazelcastInstance hazelcastInstance;

@GetMapping("/xref/references")
public String getAllXrefs(Model model) {
    IMap<String, CrossRef> xref = hazelcastInstance.getMap("xref");
    List<CrossRef> ref = new ArrayList<CrossRef>();
    for (int i = 0; i < xref.size(); i++) {
        ref.add(xref.get(Integer.toString(i)));
    }
    model.addAttribute("xrefs", ref);
    return "xref/list";
}



When I start the client application while the member is up and running, I see no issues on startup; everything loads up:

Members {size:1, ver:1} [
Member [192.168.1.102]:5702 - baf13358-0c3a-4f0e-90c2-aff2673ed0b1 this
]

2020-09-23 16:31:35.667  WARN 12840 --- [           main] com.hazelcast.instance.Node              : [192.168.1.102]:5702 [dev] [3.12.8] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
2020-09-23 16:31:35.668  INFO 12840 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5702 [dev] [3.12.8] [192.168.1.102]:5702 is STARTED
2020-09-23 16:31:35.681  INFO 12840 --- [           main] com.hazelcast.client.HazelcastClient     : hz.client_1 [dev] [3.12.8] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release.
2020-09-23 16:31:35.703  INFO 12840 --- [           main] c.h.client.spi.ClientInvocationService   : hz.client_1 [dev] [3.12.8] Running with 2 response threads, dynamic=false
2020-09-23 16:31:35.742  INFO 12840 --- [           main] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is STARTING
2020-09-23 16:31:35.743  INFO 12840 --- [           main] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is STARTED
2020-09-23 16:31:35.752  INFO 12840 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to cluster with name: dev
2020-09-23 16:31:35.756  INFO 12840 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 16:31:35.774  INFO 12840 --- [nt_1.internal-2] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Setting ClientConnection{alive=true, connectionId=1, channel=NioChannel{/10.140.127.248:63301->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 16:31:35.770, lastWriteTime=2020-09-23 16:31:35.770, closedTime=never, connected server version=3.12.8} as owner with principal ClientPrincipal{uuid='6406ae1f-1246-4f3c-aec8-70929208e0a8', ownerUuid='ebd3a457-410b-4ae1-8266-a50c55a0efe2'}
2020-09-23 16:31:35.774  INFO 12840 --- [nt_1.internal-2] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Authenticated with server [10.140.127.248]:14571, server version:3.12.8 Local address: /10.140.127.248:63301
2020-09-23 16:31:35.781  INFO 12840 --- [ient_1.event-15] c.h.c.spi.impl.ClientMembershipListener  : hz.client_1 [dev] [3.12.8]

Members [1] {
Member [10.140.127.248]:14571 - ebd3a457-410b-4ae1-8266-a50c55a0efe2
}

2020-09-23 16:31:35.783  INFO 12840 --- [ient_1.cluster-] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is CLIENT_CONNECTED
2020-09-23 16:31:35.785  INFO 12840 --- [           main] c.h.internal.diagnostics.Diagnostics     : hz.client_1 [dev] [3.12.8] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Successful call. Size should be : 1314 --> 1314
2020-09-23 16:31:39.835  INFO 12840 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2020-09-23 16:31:40.040  INFO 12840 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8089 (http) with context path ''
2020-09-23 16:31:40.050  INFO 12840 --- [           main] .r.r.RfilHazelcasrCacheClientApplication : Started RfilHazelcasrCacheClientApplication in 12.11 seconds (JVM running for 12.783)



But when I manually stop the member node and then hit the UI, I do not see the values populated; instead I get the same exception.

Can you help me with the code I need to use in the controller to access the near cache?

Thanks,
Balamurugan Selvam




M. Sancar Koyunlu

Sep 23, 2020, 7:16:12 AM
to Hazelcast
In your last mail, the name of the near cache config is `xrefLocal` and the name of the map is `xref`. They need to be the same. Can you fix that and retry?
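
For the controller itself, a minimal sketch along the lines of the earlier getAll suggestion (not a confirmed fix; it assumes the positional String keys used by the loader and a key count captured while the cluster was still reachable, since IMap.size(), like values(), needs a remote call and is not served by the near cache):

@GetMapping("/xref/references")
public String getAllXrefs(Model model) {
    IMap<String, CrossRef> xref = hazelcastInstance.getMap("xref");

    // cachedKeyCount is a hypothetical field recorded at startup, while the member was up.
    Set<String> keys = new HashSet<>();
    for (int i = 0; i < cachedKeyCount; i++) {
        keys.add(Integer.toString(i));
    }

    try {
        // Served entirely from the near cache when every key is cached locally.
        model.addAttribute("xrefs", new ArrayList<CrossRef>(xref.getAll(keys).values()));
    } catch (HazelcastClientOfflineException e) {
        // Only reached while disconnected if some key was missing from the near cache.
        model.addAttribute("xrefs", new ArrayList<CrossRef>());
    }
    return "xref/list";
}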





Balamurugan Selvam

Sep 23, 2020, 7:33:21 AM
to haze...@googlegroups.com
I made the changes and set everything to "xref".

Same issue: when I start the member application and then the client application, all works fine. But when I shut down the member manually, I get the exception shown further below.

My current Client config

@Bean
public HazelcastInstance hazelcastInstance() {

    ClientConfig clientConfig = new ClientConfig();
    NearCacheConfig clientNearCacheConfig = new NearCacheConfig("xref")
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setInvalidateOnChange(false);
    clientConfig.addNearCacheConfig(clientNearCacheConfig);

    ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig();
    addressList = new ArrayList<>();
    addressList.add("10.140.127.248:14571");
    networkConfig.setAddresses(addressList);
    networkConfig.addAddress(addressList.toArray(new String[addressList.size()]));
    clientConfig.setNetworkConfig(networkConfig);

    ClientConnectionStrategyConfig connectionStrategyConfig = clientConfig.getConnectionStrategyConfig();
    connectionStrategyConfig.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
    ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig();
    connectionRetryConfig.setInitialBackoffMillis(1000)
            .setMaxBackoffMillis(10000)
            .setMultiplier(1)
            .setJitter(0.2);
    connectionRetryConfig.setFailOnMaxBackoff(false);
    connectionRetryConfig.setEnabled(true);
    clientConfig.setConnectionStrategyConfig(connectionStrategyConfig);

    HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

    IMap<String, CrossRef> map = client.getMap("xref");
    for (int i = 0; i < map.size(); i++) {
        map.put(Integer.toString(i), map.get(Integer.toString(i)));
    }

    for (int i = 0; i < map.size(); i++) {
        map.get(Integer.toString(i));
    }

    HashSet<CrossRef> objects = new HashSet<>();
    for (int i = 0; i < map.size(); i++) {
        objects.add(map.get(Integer.toString(i)));
    }
    try {
        System.out.println("Successful call. Size should be : " + objects.size() + " --> " + map.size());
    } catch (HazelcastClientOfflineException e) {
        // We do not expect HazelcastClientOfflineException in this example
    }

    return client;
}



My controller -

@Autowired
private HazelcastInstance hazelcastInstance;

@GetMapping("/xref/references")
public String getAllXrefs(Model model) {
    IMap<String, CrossRef> xref = hazelcastInstance.getMap("xref");
    List<CrossRef> ref = new ArrayList<CrossRef>();
    for (int i = 0; i < xref.size(); i++) {
        ref.add(xref.get(Integer.toString(i)));
    }
    model.addAttribute("xrefs", ref);
    return "xref/list";
}


I am getting the exception below --

OpenJDK 64-Bit Server VM warning: Options -Xverify:none and -noverify were deprecated in JDK 13 and will likely be removed in a future release.

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.3.2.RELEASE)

2020-09-23 16:54:03.089  INFO 7716 --- [           main] .r.r.RfilHazelcasrCacheClientApplication : Starting RfilHazelcasrCacheClientApplication on CHE-46FWGH2 with PID 7716 (D:\RFILProject\SpringTools\GIT_SOURCECODE\hazelcast-client-working-test-application\target\classes started by bselvam in D:\RFILProject\SpringTools\GIT_SOURCECODE\hazelcast-client-working-test-application)
2020-09-23 16:54:03.093  INFO 7716 --- [           main] .r.r.RfilHazelcasrCacheClientApplication : No active profile set, falling back to default profiles: default
2020-09-23 16:54:03.998  WARN 7716 --- [           main] c.h.instance.HazelcastInstanceFactory    : Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:
 --add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2020-09-23 16:54:04.001  INFO 7716 --- [           main] c.h.config.AbstractConfigLocator         : Loading 'hazelcast-default.xml' from the classpath.
2020-09-23 16:54:04.379  INFO 7716 --- [           main] com.hazelcast.instance.AddressPicker     : [LOCAL] [dev] [3.12.8] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2020-09-23 16:54:04.428  INFO 7716 --- [           main] com.hazelcast.instance.AddressPicker     : [LOCAL] [dev] [3.12.8] Picked [192.168.1.102]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2020-09-23 16:54:04.437  INFO 7716 --- [           main] com.hazelcast.system                     : [192.168.1.102]:5701 [dev] [3.12.8] Hazelcast 3.12.8 (20200625 - 35a975e) starting at [192.168.1.102]:5701
2020-09-23 16:54:04.437  INFO 7716 --- [           main] com.hazelcast.system                     : [192.168.1.102]:5701 [dev] [3.12.8] Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
2020-09-23 16:54:04.707  INFO 7716 --- [           main] c.h.s.i.o.impl.BackpressureRegulator     : [192.168.1.102]:5701 [dev] [3.12.8] Backpressure is disabled
2020-09-23 16:54:05.526  INFO 7716 --- [           main] com.hazelcast.instance.Node              : [192.168.1.102]:5701 [dev] [3.12.8] Creating MulticastJoiner
2020-09-23 16:54:05.797  INFO 7716 --- [           main] c.h.s.i.o.impl.OperationExecutorImpl     : [192.168.1.102]:5701 [dev] [3.12.8] Starting 4 partition threads and 3 generic threads (1 dedicated for priority tasks)
2020-09-23 16:54:05.799  INFO 7716 --- [           main] c.h.internal.diagnostics.Diagnostics     : [192.168.1.102]:5701 [dev] [3.12.8] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-09-23 16:54:05.813  INFO 7716 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5701 [dev] [3.12.8] [192.168.1.102]:5701 is STARTING
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/C:/Users/bselvam/.m2/repository/com/hazelcast/hazelcast/3.12.8/hazelcast-3.12.8.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-09-23 16:54:08.193  INFO 7716 --- [           main] c.h.internal.cluster.ClusterService      : [192.168.1.102]:5701 [dev] [3.12.8]

Members {size:1, ver:1} [
Member [192.168.1.102]:5701 - 7bd0d36c-900d-4d6b-9b00-2d7d46d34245 this
]

2020-09-23 16:54:08.212  INFO 7716 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5701 [dev] [3.12.8] [192.168.1.102]:5701 is STARTED
2020-09-23 16:54:08.609  INFO 7716 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8089 (http)
2020-09-23 16:54:08.622  INFO 7716 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2020-09-23 16:54:08.622  INFO 7716 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.37]
2020-09-23 16:54:08.767  INFO 7716 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2020-09-23 16:54:08.767  INFO 7716 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 5612 ms
2020-09-23 16:54:08.810  INFO 7716 --- [           main] c.h.config.AbstractConfigLocator         : Loading 'hazelcast-default.xml' from the classpath.
2020-09-23 16:54:08.883  INFO 7716 --- [           main] com.hazelcast.instance.AddressPicker     : [LOCAL] [dev] [3.12.8] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2020-09-23 16:54:08.947  INFO 7716 --- [           main] com.hazelcast.instance.AddressPicker     : [LOCAL] [dev] [3.12.8] Picked [192.168.1.102]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true
2020-09-23 16:54:08.948  INFO 7716 --- [           main] com.hazelcast.system                     : [192.168.1.102]:5702 [dev] [3.12.8] Hazelcast 3.12.8 (20200625 - 35a975e) starting at [192.168.1.102]:5702
2020-09-23 16:54:08.948  INFO 7716 --- [           main] com.hazelcast.system                     : [192.168.1.102]:5702 [dev] [3.12.8] Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
2020-09-23 16:54:08.952  INFO 7716 --- [           main] c.h.s.i.o.impl.BackpressureRegulator     : [192.168.1.102]:5702 [dev] [3.12.8] Backpressure is disabled
2020-09-23 16:54:09.005  INFO 7716 --- [           main] com.hazelcast.instance.Node              : [192.168.1.102]:5702 [dev] [3.12.8] Creating MulticastJoiner
2020-09-23 16:54:09.011  INFO 7716 --- [           main] c.h.s.i.o.impl.OperationExecutorImpl     : [192.168.1.102]:5702 [dev] [3.12.8] Starting 4 partition threads and 3 generic threads (1 dedicated for priority tasks)
2020-09-23 16:54:09.012  INFO 7716 --- [           main] c.h.internal.diagnostics.Diagnostics     : [192.168.1.102]:5702 [dev] [3.12.8] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-09-23 16:54:09.013  INFO 7716 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5702 [dev] [3.12.8] [192.168.1.102]:5702 is STARTING
2020-09-23 16:54:11.298  INFO 7716 --- [           main] c.h.internal.cluster.ClusterService      : [192.168.1.102]:5702 [dev] [3.12.8]

Members {size:1, ver:1} [
Member [192.168.1.102]:5702 - 1198b862-0542-4a3b-8611-7edc3afc9b04 this
]

2020-09-23 16:54:11.298  WARN 7716 --- [           main] com.hazelcast.instance.Node              : [192.168.1.102]:5702 [dev] [3.12.8] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
2020-09-23 16:54:11.300  INFO 7716 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5702 [dev] [3.12.8] [192.168.1.102]:5702 is STARTED
2020-09-23 16:54:11.317  INFO 7716 --- [           main] com.hazelcast.client.HazelcastClient     : hz.client_1 [dev] [3.12.8] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release.
2020-09-23 16:54:11.346  INFO 7716 --- [           main] c.h.client.spi.ClientInvocationService   : hz.client_1 [dev] [3.12.8] Running with 2 response threads, dynamic=false
2020-09-23 16:54:11.391  INFO 7716 --- [           main] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is STARTING
2020-09-23 16:54:11.392  INFO 7716 --- [           main] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is STARTED
2020-09-23 16:54:11.400  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to cluster with name: dev
2020-09-23 16:54:11.405  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 16:54:11.426  INFO 7716 --- [nt_1.internal-2] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Setting ClientConnection{alive=true, connectionId=1, channel=NioChannel{/10.140.127.248:63767->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 16:54:11.422, lastWriteTime=2020-09-23 16:54:11.419, closedTime=never, connected server version=3.12.8} as owner with principal ClientPrincipal{uuid='8ab1675b-34d8-4df4-977a-f0cb94f40d7e', ownerUuid='ebd3a457-410b-4ae1-8266-a50c55a0efe2'}
2020-09-23 16:54:11.426  INFO 7716 --- [nt_1.internal-2] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Authenticated with server [10.140.127.248]:14571, server version:3.12.8 Local address: /10.140.127.248:63767
2020-09-23 16:54:11.432  INFO 7716 --- [ient_1.event-15] c.h.c.spi.impl.ClientMembershipListener  : hz.client_1 [dev] [3.12.8]

Members [1] {
Member [10.140.127.248]:14571 - ebd3a457-410b-4ae1-8266-a50c55a0efe2
}

2020-09-23 16:54:11.434  INFO 7716 --- [ient_1.cluster-] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is CLIENT_CONNECTED
2020-09-23 16:54:11.435  INFO 7716 --- [           main] c.h.internal.diagnostics.Diagnostics     : hz.client_1 [dev] [3.12.8] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.

Successful call. Size should be : 1314 --> 1314
2020-09-23 16:54:15.200  INFO 7716 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2020-09-23 16:54:15.395  INFO 7716 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8089 (http) with context path ''
2020-09-23 16:54:15.404  INFO 7716 --- [           main] .r.r.RfilHazelcasrCacheClientApplication : Started RfilHazelcasrCacheClientApplication in 12.744 seconds (JVM running for 13.645)
2020-09-23 16:54:28.424  INFO 7716 --- [nio-8089-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-09-23 16:54:28.424  INFO 7716 --- [nio-8089-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2020-09-23 16:54:28.431  INFO 7716 --- [nio-8089-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 7 ms
2020-09-23 16:54:31.305  INFO 7716 --- [v.HealthMonitor] c.h.internal.diagnostics.HealthMonitor   : [192.168.1.102]:5702 [dev] [3.12.8] processors=4, physical.memory.total=15.9G, physical.memory.free=5.0G, swap.space.total=0, swap.space.free=0, heap.memory.used=44.9M, heap.memory.free=36.1M, heap.memory.total=81.0M, heap.memory.max=4.0G, heap.memory.used/total=55.49%, heap.memory.used/max=1.10%, minor.gc.count=0, minor.gc.time=0ms, major.gc.count=0, major.gc.time=0ms, load.process=0.00%, load.system=72.92%, load.systemAverage=n/a thread.count=109, thread.peakCount=109, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=1, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=0, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
2020-09-23 16:54:55.316  INFO 7716 --- [.IO.thread-in-0] c.h.c.connection.nio.ClientConnection    : hz.client_1 [dev] [3.12.8] ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.140.127.248:63767->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 16:54:55.315, lastWriteTime=2020-09-23 16:54:52.219, closedTime=2020-09-23 16:54:55.315, connected server version=3.12.8} closed. Reason: Connection closed by the other side
2020-09-23 16:54:55.318  INFO 7716 --- [.IO.thread-in-0] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Removed connection to endpoint: [10.140.127.248]:14571, connection: ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.140.127.248:63767->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 16:54:55.315, lastWriteTime=2020-09-23 16:54:52.219, closedTime=2020-09-23 16:54:55.315, connected server version=3.12.8}
2020-09-23 16:54:55.318  INFO 7716 --- [ient_1.cluster-] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is CLIENT_DISCONNECTED
2020-09-23 16:54:55.318  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to cluster with name: dev
2020-09-23 16:54:55.319  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 16:54:56.324  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 16:54:56.324  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 851 ms, attempt 1, retry timeout millis 10000 cap
2020-09-23 16:54:57.177  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 16:54:57.199 ERROR 7716 --- [nio-8089-exec-5] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.hazelcast.client.HazelcastClientOfflineException: Client is offline.] with root cause


com.hazelcast.client.HazelcastClientOfflineException: Client is offline.
at com.hazelcast.client.connection.nio.DefaultClientConnectionStrategy.beforeGetConnection(DefaultClientConnectionStrategy.java:66) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl.checkAllowed(ClientConnectionManagerImpl.java:300) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl.getConnection(ClientConnectionManagerImpl.java:272) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl.getOrTriggerConnect(ClientConnectionManagerImpl.java:263) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.SmartClientInvocationService.getOrTriggerConnect(SmartClientInvocationService.java:73) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.SmartClientInvocationService.invokeOnRandomTarget(SmartClientInvocationService.java:58) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.ClientInvocation.invokeOnSelection(ClientInvocation.java:167) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.impl.ClientInvocation.invoke(ClientInvocation.java:146) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:251) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.hazelcast.client.proxy.ClientMapProxy.size(ClientMapProxy.java:1686) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.radial.rfil.XrefUIController.getAllXrefs(XrefUIController.java:31) ~[classes/:na]
2020-09-23 16:54:58.188  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 16:54:58.189  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 985 ms, attempt 2, retry timeout millis 10000 cap
2020-09-23 16:54:59.175  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 16:55:00.177  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 16:55:00.183  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 876 ms, attempt 3, retry timeout millis 10000 cap
2020-09-23 16:55:01.064  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 16:55:02.077  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 16:55:02.077  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 890 ms, attempt 4, retry timeout millis 10000 cap
2020-09-23 16:55:02.968  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 16:55:03.970  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 16:55:03.970  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 818 ms, attempt 5, retry timeout millis 10000 cap
2020-09-23 16:55:04.789  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 16:55:05.792  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 16:55:05.792  WARN 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 943 ms, attempt 6, retry timeout millis 10000 cap
2020-09-23 16:55:06.736  INFO 7716 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member

M. Sancar Koyunlu

unread,
Sep 23, 2020, 7:48:00 AM9/23/20
to Hazelcast
I just saw that in your code you are calling map.size().
Again, anything that requires information from the remote cluster will throw HazelcastClientOfflineException. A client cannot know the size of the map without asking the cluster.
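For illustration only, a minimal sketch that is not from this thread (it assumes `client` is the HazelcastClient instance built in the configuration and CrossRef is the value type used throughout): a get() for a key that is already in the near cache is answered locally, while size() is always a remote invocation and therefore fails while the client is offline.
```
import com.hazelcast.client.HazelcastClientOfflineException;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

// Hedged sketch: near-cached reads vs. operations that must reach the cluster.
public class NearCacheVsSizeSketch {

    static void illustrate(HazelcastInstance client) {
        IMap<String, CrossRef> xref = client.getMap("xref");

        // If key "0" was read earlier, this is served from the client near cache,
        // so it keeps working even while the member is down.
        CrossRef cached = xref.get("0");

        try {
            // size() always has to ask a member; while the client is disconnected
            // and reconnecting asynchronously it fails with the exception below.
            int size = xref.size();
        } catch (HazelcastClientOfflineException e) {
            // Fall back to locally known data or report the cluster as unavailable.
        }
    }
}
```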

Balamurugan Selvam

unread,
Sep 23, 2020, 8:35:54 AM9/23/20
to haze...@googlegroups.com
Understood.

I removed the unwanted map.size() call from the code.



New Configuration -

@Bean
public HazelcastInstance hazelcastInstance() {
    ClientConfig clientConfig = new ClientConfig();

    // Client-side near cache for the "xref" map
    NearCacheConfig clientNearCacheConfig = new NearCacheConfig("xref")
            .setInMemoryFormat(InMemoryFormat.OBJECT)
            .setInvalidateOnChange(false);
    clientConfig.addNearCacheConfig(clientNearCacheConfig);

    // Address of the dedicated Hazelcast server application
    ClientNetworkConfig networkConfig = clientConfig.getNetworkConfig();
    List<String> addressList = new ArrayList<>();
    addressList.add("10.140.127.248:14571");
    networkConfig.setAddresses(addressList);
    clientConfig.setNetworkConfig(networkConfig);

    // Reconnect asynchronously and keep retrying instead of shutting the client down
    ClientConnectionStrategyConfig connectionStrategyConfig = clientConfig.getConnectionStrategyConfig();
    connectionStrategyConfig.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
    ConnectionRetryConfig connectionRetryConfig = connectionStrategyConfig.getConnectionRetryConfig();
    connectionRetryConfig.setInitialBackoffMillis(1000)
            .setMaxBackoffMillis(10000)
            .setMultiplier(1)
            .setJitter(0.2);
    connectionRetryConfig.setFailOnMaxBackoff(false);
    connectionRetryConfig.setEnabled(true);
    clientConfig.setConnectionStrategyConfig(connectionStrategyConfig);

    HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

    // Warm-up: read each preloaded entry and write it back (note: map.size() is still called here)
    IMap<String, CrossRef> map = client.getMap("xref");
    for (int i = 0; i < map.size(); i++) {
        map.put(Integer.toString(i), map.get(Integer.toString(i)));
    }
    return client;
}
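
A possible refinement, as a minimal sketch that is not part of the original post: if I read the 3.12 client behaviour correctly, a plain get() is enough to populate the client near cache, whereas a put() through the same client invalidates its own near-cache entry for that key again, so a read-only warm-up may serve the purpose better. PRELOADED_ENTRY_COUNT is a hypothetical constant standing in for the number of entries the server application loaded.
```
// Hedged sketch of a read-only warm-up; PRELOADED_ENTRY_COUNT is a placeholder
// for however many entries the server loaded into the "xref" map.
IMap<String, CrossRef> map = client.getMap("xref");
for (int i = 0; i < PRELOADED_ENTRY_COUNT; i++) {
    map.get(Integer.toString(i));   // each get() fills the near cache for that key
}
```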


Controller -

private HazelcastInstance hazelcastInstance;

@GetMapping("/xref/references")
public String getAllXrefs(Model model) {
    IMap<String, CrossRef> xref = hazelcastInstance.getMap("xref");
    List<CrossRef> ref = new ArrayList<CrossRef>();
    for (int i = 0; i < xref.size(); i++) {
        ref.add(xref.get(Integer.toString(i)));
    }
    model.addAttribute("xrefs", ref);
    return "xref/list";
}

I am still getting an exception if I manually shut down the member node.



OpenJDK 64-Bit Server VM warning: Options -Xverify:none and -noverify were deprecated in JDK 13 and will likely be removed in a future release.

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.3.2.RELEASE)

2020-09-23 17:26:35.890  INFO 14756 --- [           main] .r.r.RfilHazelcasrCacheClientApplication : Starting RfilHazelcasrCacheClientApplication on CHE-46FWGH2 with PID 14756 (D:\RFILProject\SpringTools\GIT_SOURCECODE\hazelcast-client-working-test-application\target\classes started by bselvam in D:\RFILProject\SpringTools\GIT_SOURCECODE\hazelcast-client-working-test-application)
2020-09-23 17:26:35.892  INFO 14756 --- [           main] .r.r.RfilHazelcasrCacheClientApplication : No active profile set, falling back to default profiles: default
2020-09-23 17:26:36.573  WARN 14756 --- [           main] c.h.instance.HazelcastInstanceFactory    : Hazelcast is starting in a Java modular environment (Java 9 and newer) but without proper access to required Java packages. Use additional Java arguments to provide Hazelcast access to Java internal API. The internal API access is used to get the best performance results. Arguments to be used:

 --add-modules java.se --add-exports java.base/jdk.internal.ref=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.management/sun.management=ALL-UNNAMED --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
2020-09-23 17:26:36.586  INFO 14756 --- [           main] c.h.config.AbstractConfigLocator         : Loading 'hazelcast-default.xml' from the classpath.
2020-09-23 17:26:36.932  INFO 14756 --- [           main] com.hazelcast.instance.AddressPicker     : [LOCAL] [dev] [3.12.8] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2020-09-23 17:26:36.982  INFO 14756 --- [           main] com.hazelcast.instance.AddressPicker     : [LOCAL] [dev] [3.12.8] Picked [192.168.1.102]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2020-09-23 17:26:36.993  INFO 14756 --- [           main] com.hazelcast.system                     : [192.168.1.102]:5701 [dev] [3.12.8] Hazelcast 3.12.8 (20200625 - 35a975e) starting at [192.168.1.102]:5701
2020-09-23 17:26:36.993  INFO 14756 --- [           main] com.hazelcast.system                     : [192.168.1.102]:5701 [dev] [3.12.8] Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
2020-09-23 17:26:37.227  INFO 14756 --- [           main] c.h.s.i.o.impl.BackpressureRegulator     : [192.168.1.102]:5701 [dev] [3.12.8] Backpressure is disabled
2020-09-23 17:26:37.781  INFO 14756 --- [           main] com.hazelcast.instance.Node              : [192.168.1.102]:5701 [dev] [3.12.8] Creating MulticastJoiner
2020-09-23 17:26:37.907  INFO 14756 --- [           main] c.h.s.i.o.impl.OperationExecutorImpl     : [192.168.1.102]:5701 [dev] [3.12.8] Starting 4 partition threads and 3 generic threads (1 dedicated for priority tasks)
2020-09-23 17:26:37.909  INFO 14756 --- [           main] c.h.internal.diagnostics.Diagnostics     : [192.168.1.102]:5701 [dev] [3.12.8] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-09-23 17:26:37.920  INFO 14756 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5701 [dev] [3.12.8] [192.168.1.102]:5701 is STARTING

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/C:/Users/bselvam/.m2/repository/com/hazelcast/hazelcast/3.12.8/hazelcast-3.12.8.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-09-23 17:26:40.382  INFO 14756 --- [           main] c.h.internal.cluster.ClusterService      : [192.168.1.102]:5701 [dev] [3.12.8]

Members {size:1, ver:1} [
Member [192.168.1.102]:5701 - 9c485df1-34e6-4800-9327-6924f9ec2de3 this
]

2020-09-23 17:26:40.400  INFO 14756 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5701 [dev] [3.12.8] [192.168.1.102]:5701 is STARTED
2020-09-23 17:26:40.798  INFO 14756 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8089 (http)
2020-09-23 17:26:40.810  INFO 14756 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2020-09-23 17:26:40.810  INFO 14756 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.37]
2020-09-23 17:26:40.935  INFO 14756 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2020-09-23 17:26:40.935  INFO 14756 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 4998 ms
2020-09-23 17:26:40.973  INFO 14756 --- [           main] c.h.config.AbstractConfigLocator         : Loading 'hazelcast-default.xml' from the classpath.
2020-09-23 17:26:41.039  INFO 14756 --- [           main] com.hazelcast.instance.AddressPicker     : [LOCAL] [dev] [3.12.8] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2020-09-23 17:26:41.084  INFO 14756 --- [           main] com.hazelcast.instance.AddressPicker     : [LOCAL] [dev] [3.12.8] Picked [192.168.1.102]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true
2020-09-23 17:26:41.085  INFO 14756 --- [           main] com.hazelcast.system                     : [192.168.1.102]:5702 [dev] [3.12.8] Hazelcast 3.12.8 (20200625 - 35a975e) starting at [192.168.1.102]:5702
2020-09-23 17:26:41.085  INFO 14756 --- [           main] com.hazelcast.system                     : [192.168.1.102]:5702 [dev] [3.12.8] Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
2020-09-23 17:26:41.087  INFO 14756 --- [           main] c.h.s.i.o.impl.BackpressureRegulator     : [192.168.1.102]:5702 [dev] [3.12.8] Backpressure is disabled
2020-09-23 17:26:41.124  INFO 14756 --- [           main] com.hazelcast.instance.Node              : [192.168.1.102]:5702 [dev] [3.12.8] Creating MulticastJoiner
2020-09-23 17:26:41.131  INFO 14756 --- [           main] c.h.s.i.o.impl.OperationExecutorImpl     : [192.168.1.102]:5702 [dev] [3.12.8] Starting 4 partition threads and 3 generic threads (1 dedicated for priority tasks)
2020-09-23 17:26:41.132  INFO 14756 --- [           main] c.h.internal.diagnostics.Diagnostics     : [192.168.1.102]:5702 [dev] [3.12.8] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-09-23 17:26:41.132  INFO 14756 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5702 [dev] [3.12.8] [192.168.1.102]:5702 is STARTING
2020-09-23 17:26:43.538  INFO 14756 --- [           main] c.h.internal.cluster.ClusterService      : [192.168.1.102]:5702 [dev] [3.12.8]

Members {size:1, ver:1} [
Member [192.168.1.102]:5702 - ac404160-d697-458e-9a76-4e683edec99b this
]

2020-09-23 17:26:43.538  WARN 14756 --- [           main] com.hazelcast.instance.Node              : [192.168.1.102]:5702 [dev] [3.12.8] Config seed port is 5701 and cluster size is 1. Some of the ports seem occupied!
2020-09-23 17:26:43.539  INFO 14756 --- [           main] com.hazelcast.core.LifecycleService      : [192.168.1.102]:5702 [dev] [3.12.8] [192.168.1.102]:5702 is STARTED
2020-09-23 17:26:43.557  INFO 14756 --- [           main] com.hazelcast.client.HazelcastClient     : hz.client_1 [dev] [3.12.8] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release.
2020-09-23 17:26:43.580  INFO 14756 --- [           main] c.h.client.spi.ClientInvocationService   : hz.client_1 [dev] [3.12.8] Running with 2 response threads, dynamic=false
2020-09-23 17:26:43.617  INFO 14756 --- [           main] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is STARTING
2020-09-23 17:26:43.618  INFO 14756 --- [           main] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is STARTED
2020-09-23 17:26:43.626  INFO 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to cluster with name: dev
2020-09-23 17:26:43.629  INFO 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 17:26:43.647  INFO 14756 --- [nt_1.internal-2] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Setting ClientConnection{alive=true, connectionId=1, channel=NioChannel{/10.140.127.248:64882->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 17:26:43.644, lastWriteTime=2020-09-23 17:26:43.641, closedTime=never, connected server version=3.12.8} as owner with principal ClientPrincipal{uuid='ccb2a2a9-6b4c-4166-9001-9ff90506227d', ownerUuid='1807f0c4-0c97-4670-ad75-d87322c14847'}
2020-09-23 17:26:43.647  INFO 14756 --- [nt_1.internal-2] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Authenticated with server [10.140.127.248]:14571, server version:3.12.8 Local address: /10.140.127.248:64882
2020-09-23 17:26:43.653  INFO 14756 --- [ient_1.event-14] c.h.c.spi.impl.ClientMembershipListener  : hz.client_1 [dev] [3.12.8]

Members [1] {
Member [10.140.127.248]:14571 - 1807f0c4-0c97-4670-ad75-d87322c14847
}

2020-09-23 17:26:43.654  INFO 14756 --- [ient_1.cluster-] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is CLIENT_CONNECTED
2020-09-23 17:26:43.655  INFO 14756 --- [           main] c.h.internal.diagnostics.Diagnostics     : hz.client_1 [dev] [3.12.8] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-09-23 17:26:45.608  INFO 14756 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2020-09-23 17:26:45.797  INFO 14756 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8089 (http) with context path ''
2020-09-23 17:26:45.805  INFO 14756 --- [           main] .r.r.RfilHazelcasrCacheClientApplication : Started RfilHazelcasrCacheClientApplication in 10.252 seconds (JVM running for 10.975)
2020-09-23 17:27:10.305  INFO 14756 --- [nio-8089-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-09-23 17:27:10.305  INFO 14756 --- [nio-8089-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2020-09-23 17:27:10.313  INFO 14756 --- [nio-8089-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 8 ms
2020-09-23 17:27:33.493  INFO 14756 --- [.IO.thread-in-0] c.h.c.connection.nio.ClientConnection    : hz.client_1 [dev] [3.12.8] ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.140.127.248:64882->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 17:27:33.492, lastWriteTime=2020-09-23 17:27:32.431, closedTime=2020-09-23 17:27:33.493, connected server version=3.12.8} closed. Reason: Connection closed by the other side
2020-09-23 17:27:33.496  INFO 14756 --- [.IO.thread-in-0] c.h.c.c.ClientConnectionManager          : hz.client_1 [dev] [3.12.8] Removed connection to endpoint: [10.140.127.248]:14571, connection: ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.140.127.248:64882->/10.140.127.248:14571}, remoteEndpoint=[10.140.127.248]:14571, lastReadTime=2020-09-23 17:27:33.492, lastWriteTime=2020-09-23 17:27:32.431, closedTime=2020-09-23 17:27:33.493, connected server version=3.12.8}
2020-09-23 17:27:33.497  INFO 14756 --- [ient_1.cluster-] com.hazelcast.core.LifecycleService      : hz.client_1 [dev] [3.12.8] HazelcastClient 3.12.8 (20200625 - 35a975e) is CLIENT_DISCONNECTED
2020-09-23 17:27:33.498  INFO 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to cluster with name: dev
2020-09-23 17:27:33.498  INFO 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 17:27:34.503  WARN 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 17:27:34.503  WARN 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 843 ms, attempt 1, retry timeout millis 10000 cap
2020-09-23 17:27:34.801 ERROR 14756 --- [nio-8089-exec-6] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.hazelcast.client.HazelcastClientOfflineException: Client is offline.] with root cause
2020-09-23 17:27:35.347  INFO 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 17:27:36.349  WARN 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 17:27:36.350  WARN 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 874 ms, attempt 2, retry timeout millis 10000 cap
2020-09-23 17:27:37.225  INFO 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 17:27:38.228  WARN 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 17:27:38.229  WARN 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 930 ms, attempt 3, retry timeout millis 10000 cap
2020-09-23 17:27:39.161  INFO 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member
2020-09-23 17:27:40.162  WARN 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Exception during initial connection to [10.140.127.248]:14571: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused: no further information to address /10.140.127.248:14571
2020-09-23 17:27:40.163  WARN 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Unable to get live cluster connection, retry in 829 ms, attempt 4, retry timeout millis 10000 cap
2020-09-23 17:27:40.993  INFO 14756 --- [ient_1.cluster-] c.h.c.c.nio.ClusterConnectorService      : hz.client_1 [dev] [3.12.8] Trying to connect to [10.140.127.248]:14571 as owner member

M. Sancar Koyunlu

unread,
Sep 23, 2020, 8:49:41 AM9/23/20
to Hazelcast
You are getting the exception from the `getAllXrefs` method in your controller.
```
for(int i=0; i<xref.size(); i++) { <<<<-===== HERE
ref.add(xref.get(Integer.toString(i)));
}
```
And this is the stack trace showing that map.size() is the call being made:
```

at com.hazelcast.client.proxy.ClientMapProxy.size(ClientMapProxy.java:1686) ~[hazelcast-client-3.12.8.jar:3.12.8]
at com.radial.rfil.XrefUIController.getAllXrefs(XrefUIController.java:31) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:564) ~[na:na]
```



Balamurugan Selvam

unread,
Sep 23, 2020, 9:25:40 AM9/23/20
to haze...@googlegroups.com
Oops. Is there any other way to get all the values without hitting the cluster?
I need all the values to be shown in a UI even when the member node is down.

Can you please help me with that?


Thanks,
Balamurugan Selvam

Balamurugan Selvam

unread,
Sep 24, 2020, 2:32:57 AM9/24/20
to haze...@googlegroups.com
Hi Sancar,

Many thanks for your help on resolving this issue. I am able to get the near cache values when member nodes are down.

As you suggested, I removed the .size() call and it now works as expected.
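
A minimal sketch of what the size()-free loop could look like (not the exact code from the application; preloadedKeys is a hypothetical client-side List<String> remembered while the near cache was warmed up):
```
// Hedged sketch: iterate keys known on the client instead of asking the cluster for size().
@GetMapping("/xref/references")
public String getAllXrefs(Model model) {
    IMap<String, CrossRef> xref = hazelcastInstance.getMap("xref");
    List<CrossRef> refs = new ArrayList<CrossRef>();
    for (String key : preloadedKeys) {
        CrossRef value = xref.get(key);   // served from the near cache even while the member is down
        if (value != null) {
            refs.add(value);
        }
    }
    model.addAttribute("xrefs", refs);
    return "xref/list";
}
```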

Thanks,
Balamurugan Selvam