Hazelcast not working for single node/non-clustered environment - 6.5.8

Morning Star (vidivelli)

Oct 7, 2022, 1:07:41 AM
to CAS Community
Hi all,

I am working on upgrading our CAS server from 6.3.7.4 to 6.5.8.
The clustered environment with 3 nodes is working without any issue.
But in the lower environment with a single node, it is not working; I get an error on server startup.

Can someone please help?
Is there a way to disable Hazelcast for a single node?

2022-10-06 08:27:31 [DEBUG] org.apereo.cas.ticket.expiration.builder.TicketGrantingTicketExpirationPolicyBuilder  Remember me expiration policy is being configured based on hard timeout of [28800] seconds
2022-10-06 08:27:31 [DEBUG] org.apereo.cas.ticket.expiration.builder.TicketGrantingTicketExpirationPolicyBuilder  Ticket-granting ticket expiration policy is based on a timeout of [28800] seconds
2022-10-06 08:27:31 [DEBUG] org.apereo.cas.ticket.expiration.builder.TicketGrantingTicketExpirationPolicyBuilder  Final effective time-to-live of remember-me expiration policy is [9223372036854775807] seconds
2022-10-06 08:27:33 [WARN] com.hazelcast.cp.CPSubsystem  [10.34.abc.55]:5706 [dev] [5.0.2] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2022-10-06 08:27:34 [ERROR] com.hazelcast.security  [10.34.abc.55]:5706 [dev] [5.0.2] Node could not join cluster. Before join check failed node is going to shutdown now!
2022-10-06 08:27:34 [ERROR] com.hazelcast.security  [10.34.abc.55]:5706 [dev] [5.0.2] Reason of failure for node join: Joining node's version 5.0.2 is not compatible with cluster version 4.0 (Rolling Member Upgrades are only supported for the same major version) (Rolling Member Upgrades are only supported in Hazelcast Enterprise)
2022-10-06 08:27:34 [WARN] com.hazelcast.instance.impl.Node  [10.34.abc.55]:5706 [dev] [5.0.2] Terminating forcefully...
2022-10-06 08:27:35 [ERROR] com.hazelcast.internal.cluster.impl.TcpIpJoiner  [10.34.abc.55]:5706 [dev] [5.0.2] Hazelcast instance is not active!
com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
    at com.hazelcast.instance.impl.DefaultNodeExtension$1.get(DefaultNodeExtension.java:344) ~[hazelcast-5.0.2.jar:5.0.2]
    at com.hazelcast.instance.impl.DefaultNodeExtension$1.get(DefaultNodeExtension.java:341) ~[hazelcast-5.0.2.jar:5.0.2]
    at com.hazelcast.internal.serialization.impl.AbstractSerializationService.serializerFor(AbstractSerializationService.java:545) ~[hazelcast-5.0.2.jar:5.0.2]
    at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toBytes(AbstractSerializationService.java:227) ~[hazelcast-5.0.2.jar:5.0.2]
    at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toBytes(AbstractSerializationService.java:214) ~[hazelcast-5.0.2.jar:5.0.2]
    at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toBytes(AbstractSerializationService.java:205) ~[hazelcast-5.0.2.jar:5.0.2]
    at com.hazelcast.spi.impl.operationservice.impl.OutboundOperationHandler.toPacket(OutboundOperationHandler.java:63) ~[hazelcast-5.0.2.jar:5.0.2]
    at com.hazelcast.spi.impl.operationservice.impl.OutboundOperationHandler.send(OutboundOperationHandler.java:54) ~[hazelcast-5.0.2.jar:5.0.2]

cas.properties:
cas.ticket.registry.hazelcast.cluster.network.members=10.34.abc.55 (also tried localhost and 127.0.0.1 here)
cas.ticket.registry.hazelcast.cluster.core.instance-name=10.34.abc.55 (also tried localhost and 127.0.0.1 here)
cas.ticket.registry.hazelcast.cluster.network.port=5704
cas.ticket.registry.hazelcast.core.enable-compression=false
cas.ticket.registry.hazelcast.cluster.core.asyncbackup-count=0
cas.ticket.registry.hazelcast.cluster.core.backup-count=1
cas.ticket.registry.hazelcast.cluster.network.port-auto-increment=true
cas.ticket.registry.hazelcast.cluster.tcpip-enabled=false
cas.ticket.registry.hazelcast.cluster.multicast.enabled=false

Regards,
Anusuya.

Stef

Oct 7, 2022, 3:24:46 AM
to cas-...@apereo.org
Hi,

It looks like your single-node instance can see your multi-node cluster and refuses to join it because they are not running the same version of Hazelcast (the 4.0 cluster is likely your old CAS 6.3 nodes, since CAS 6.3 ships Hazelcast 4.x while 6.5 ships 5.x).
I think you have a mistake in your properties: the multicast flag is missing the discovery group:

cas.ticket.registry.hazelcast.cluster.discovery.multicast.enabled=false 
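As for disabling Hazelcast on a single node: the usual approach, assuming the standard CAS WAR-overlay Gradle build, is to remove the ticket-registry module so CAS falls back to its default in-memory registry. A minimal sketch (the version variable name varies by overlay):

// build.gradle: remove or comment out this dependency to drop the Hazelcast ticket registry
implementation "org.apereo.cas:cas-server-support-hazelcast-ticket-registry:${casServerVersion}"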

Regards

Stéphane


Morning Star (vidivelli)

Oct 7, 2022, 3:48:13 AM
to CAS Community, Stéphane Delcourt
Hi Stephane,

Thanks for your quick response.

After modifying cas.properties as below, the connection is created successfully, but it then gets closed and the server does not start.
CAS 6.5.8 properties:
cas.ticket.registry.hazelcast.cluster.network.members=localhost
cas.ticket.registry.hazelcast.cluster.core.instance-name=localhost
cas.ticket.registry.hazelcast.cluster.network.port=5701

cas.ticket.registry.hazelcast.core.enable-compression=false
cas.ticket.registry.hazelcast.cluster.core.asyncbackup-count=0
cas.ticket.registry.hazelcast.cluster.core.backup-count=1
cas.ticket.registry.hazelcast.cluster.network.port-auto-increment=true
cas.ticket.registry.hazelcast.cluster.tcpip-enabled=false
cas.ticket.registry.hazelcast.cluster.core.cp-member-count=1
cas.ticket.registry.hazelcast.cluster.discovery.multicast.enabled=false

2022-10-07 00:34:42 [INFO] org.ldaptive.transport.netty.NettyConnection  Closed connection org.ldaptive.transport.netty.NettyConnection@2108784804::ldapUrl=[org.ldaptive.LdapURL@1977865256::scheme=ldaps, hostname=test-cp-ldap.int.abc.com, port=3041, baseDn=null, attributes=null, scope=null, filter=null, inetAddress=null], isOpen=true, connectTime=2022-10-07T07:34:40.777203Z, connectionConfig=[org.ldaptive.ConnectionConfig@1469638133::ldapUrl=ldaps://test-cp-ldap.int.abc.com, connectTimeout=PT1H23M20S, responseTimeout=PT5S, reconnectTimeout=PT2M, autoReconnect=true, autoReconnectCondition=org.ldaptive.ConnectionConfig$$Lambda$1482/0x0000000100b67c40@657960f8, autoReplay=true, sslConfig=[org.ldaptive.ssl.SslConfig@1950858236::credentialConfig=null, trustManagers=null, hostnameVerifier=org.ldaptive.ssl.DefaultHostnameVerifier@5016da36, enabledCipherSuites=null, enabledProtocols=null, handshakeCompletedListeners=null, handshakeTimeout=PT1M], useStartTLS=false, connectionInitializers=[org.ldaptive.BindConnectionInitializer@1142803124::bindDn=uid=portlet,dc=Consumer,dc=mercuryinsurance,dc=com, bindSaslConfig=null, bindControls=null], connectionStrategy=org.ldaptive.ActivePassiveConnectionStrategy@39b3cb7a, connectionValidator=null, transportOptions={}], channel=[id: 0x646d8df8, L:/10.34.abc.55:57042 - R:test-cp-ldap.int.abc.com/10.34.abc.232:3041]


[INFO] org.ldaptive.PooledConnectionFactory  pool closed [org.ldaptive.PooledConnectionFactory@680054033::name=null, minPoolSize=3, maxPoolSize=10, validateOnCheckIn=false, validateOnCheckOut=true, validatePeriodically=true, activator=org.ldaptive.pool.AbstractConnectionPool$$Lambda$1486/0x0000000100b66c40@524c61a0, passivator=[org.ldaptive.pool.BindConnectionPassivator@1157543247::bindRequest=org.ldaptive.SimpleBindRequest@901491156::controls=null, dn=uid=portlet,dc=Consumer,dc=mercuryinsurance,dc=com], validator=[org.ldaptive.SearchConnectionValidator@1538595383::validatePeriod=PT10M, validateTimeout=PT5S, searchRequest=org.ldaptive.SearchRequest@-1109052076::controls=null, dn=, scope=OBJECT, aliases=NEVER, sizeLimit=1, timeLimit=PT0S, typesOnly=false, filter=org.ldaptive.filter.PresenceFilter@b262ac96, returnAttributes=[1.1], binaryAttributes=null], pruneStrategy=[org.ldaptive.pool.IdlePruneStrategy@1138731323::prunePeriod=PT1H23M20S, idleTime=PT1H23M20S], connectOnCreate=true, connectionFactory=[org.ldaptive.DefaultConnectionFactory@1916414146::transport=[org.ldaptive.transport.netty.ConnectionFactoryTransport@447208024::channelType=class io.netty.channel.epoll.EpollSocketChannel, ioWorkerGroup=io.netty.channel.epoll.EpollEventLoopGroup@69301f99, messageWorkerGroup=null, shutdownOnClose=true], config=[org.ldaptive.ConnectionConfig@1469638133::ldapUrl=ldaps://test-cp-ldap.int.abc.com, connectTimeout=PT1H23M20S, responseTimeout=PT5S, reconnectTimeout=PT2M, autoReconnect=true, autoReconnectCondition=org.ldaptive.ConnectionConfig$$Lambda$1482/0x0000000100b67c40@657960f8, autoReplay=true, sslConfig=[org.ldaptive.ssl.SslConfig@1950858236::credentialConfig=null, trustManagers=null, hostnameVerifier=org.ldaptive.ssl.DefaultHostnameVerifier@5016da36, enabledCipherSuites=null, enabledProtocols=null, handshakeCompletedListeners=null, handshakeTimeout=PT1M], useStartTLS=false, connectionInitializers=[org.ldaptive.BindConnectionInitializer@1142803124::bindDn=uid=portlet,dc=Consumer,dc=mercuryinsurance,dc=com, bindSaslConfig=null, bindControls=null], connectionStrategy=org.ldaptive.ActivePassiveConnectionStrategy@39b3cb7a, connectionValidator=null, transportOptions={}]], failFastInitialize=false, initialized=true, availableCount=0, activeCount=0, blockWaitTime=PT1H23M20S]

Regards,
Anusuya.

Ray Bon

Oct 7, 2022, 11:44:57 AM
to cas-...@apereo.org, the...@gmail.com
Anusuya,

ldaptive is the library that manages your LDAP connection; the messages above are about LDAP, not Hazelcast. Check those settings.
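For reference, a sketch of the usual CAS LDAP authentication settings (property names per the CAS docs; the URL and bind DN below are taken from your log, the other values are placeholders):

cas.authn.ldap[0].type=AUTHENTICATED
cas.authn.ldap[0].ldap-url=ldaps://test-cp-ldap.int.abc.com:3041
cas.authn.ldap[0].bind-dn=uid=portlet,dc=Consumer,dc=mercuryinsurance,dc=com
cas.authn.ldap[0].bind-credential=changeit
cas.authn.ldap[0].base-dn=dc=Consumer,dc=mercuryinsurance,dc=com
cas.authn.ldap[0].search-filter=uid={user}
# duration properties parse bare numbers as seconds; prefer ISO-8601 literals
cas.authn.ldap[0].connect-timeout=PT5S

One thing that stands out: your log shows connectTimeout=PT1H23M20S, i.e. 5000 seconds, which may mean a value of 5000 intended as milliseconds was parsed as seconds.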

Ray


Ray Bon

Jan 22, 2024, 2:57:27 PM
to sathish...@gmail.com, cas-...@apereo.org, the...@gmail.com
Sathish,

Is this property set to the IP or name of the single node?
cas.ticket.registry.hazelcast.cluster.network.members=

Could there be other applications (not CAS) running on the node that also run Hazelcast?

Find out why there is a cluster version of 4.1.
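One way to check, assuming a Linux host:

# list TCP listeners on the default Hazelcast port range
ss -ltnp | grep ':570'
# or identify the process bound to a specific port
lsof -i :5701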

Ray


Sathish Sekar

Jan 22, 2024, 2:57:27 PM
to CAS Community, Ray Bon, the...@gmail.com
Hi team,

I'm facing the following issue, and it is causing the server to stop. Kindly help.
2024-01-19 14:57:56 [ERROR] com.hazelcast.security  [101.34.202.94]:5701 [dev] [5.1.7] Node could not join cluster. Before join check failed node is going to shutdown now!
2024-01-19 14:57:56 [ERROR] com.hazelcast.security  [101.34.202.94]:5701 [dev] [5.1.7] Reason of failure for node join: Joining node's version 5.1.7 is not compatible with cluster version 4.1 (Rolling Member Upgrades are only supported for the same major version) (Rolling Member Upgrades are only supported in Hazelcast Enterprise)
2024-01-19 14:57:56 [WARN] com.hazelcast.instance.impl.Node  [101.34.202.94]:5701 [dev] [5.1.7] Terminating forcefully...
2024-01-19 14:57:56 [ERROR] com.hazelcast.instance.impl.Node  [101.34.202.94]:5701 [dev] [5.1.7] Could not join cluster. Shutting down now!
2024-01-19 14:57:57 [WARN] com.hazelcast.internal.util.phonehome.PhoneHome  [101.34.202.94]:5701 [dev] [5.1.7] Could not schedule phone home task! Most probably Hazelcast failed to start.
