Getting Infinispan errors while upgrading to Keycloak 12.0.1 from 11.0.2

Dulanjali Adhikari

Dec 22, 2020, 1:26:18 AM
to Keycloak User
Hi All,

After the release of Keycloak 12.0.1, we upgraded from 11.0.2 to 12.0.1. We used the Helm chart for the original Keycloak installation as well as for the upgrade.

However, we are now getting errors on both the master and worker pods.

master pod errors:
05:17:18,979 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN100000: Node keycloak-1 joined the cluster
05:17:18,980 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN000094: Received new cluster view for channel ejb: [keycloak-0|3] (2) [keycloak-0, keycloak-1]
05:17:18,983 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN100000: Node keycloak-1 joined the cluster
05:17:18,984 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN000094: Received new cluster view for channel ejb: [keycloak-0|3] (2) [keycloak-0, keycloak-1]
05:17:18,985 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN100000: Node keycloak-1 joined the cluster
05:17:18,985 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN000094: Received new cluster view for channel ejb: [keycloak-0|3] (2) [keycloak-0, keycloak-1]
05:17:18,985 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN100000: Node keycloak-1 joined the cluster
05:17:18,985 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN000094: Received new cluster view for channel ejb: [keycloak-0|3] (2) [keycloak-0, keycloak-1]
05:17:18,986 INFO [org.infinispan.CLUSTER] (thread-106,ejb,keycloak-0) ISPN100000: Node keycloak-1 joined the cluster
[0m [31m05:17:21,692 ERROR [org.infinispan.CLUSTER] (thread-108,ejb,keycloak-0) ISPN000474: Error processing request 1@keycloak-1: org.infinispan.commons.CacheException: Unknown command id 17! at org.infinispan.commons.CacheException: Unknown command id 17! at org.inf...@11.0.4.Final//org.infinispan.commands.RemoteCommandsFactory.fromStream(RemoteCommandsFactory.java:264) at org.inf...@11.0.4.Final//org.infinispan.marshall.exts.ReplicableCommandExternalizer.readCommandHeader(ReplicableCommandExternalizer.java:110) at org.inf...@11.0.4.Final//org.infinispan.marshall.exts.ReplicableCommandExternalizer.readObject(ReplicableCommandExternalizer.java:102) at org.inf...@11.0.4.Final//org.infinispan.marshall.exts.ReplicableCommandExternalizer.readObject(ReplicableCommandExternalizer.java:65) at org.inf...@11.0.4.Final//org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728) at org.inf...@11.0.4.Final//org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709) at org.inf...@11.0.4.Final//org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358) at org.inf...@11.0.4.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192) at org.inf...@11.0.4.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221) at org.inf...@11.0.4.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1362) at org.inf...@11.0.4.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1302) at org.inf...@11.0.4.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131) at org.inf...@11.0.4.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445) at org.j...@4.2.5.Final//org.jgroups.JChannel.up(JChannel.java:784) at org.j...@4.2.5.Final//org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:135) at org.j...@4.2.5.Final//org.jgroups.stack.Protocol.up(Protocol.java:306) at org.j...@4.2.5.Final//org.jgroups.protocols.FORK.up(FORK.java:142) at org.j...@4.2.5.Final//org.jgroups.protocols.FRAG3.up(FRAG3.java:165) at org.j...@4.2.5.Final//org.jgroups.protocols.FlowControl.up(FlowControl.java:351) at org.j...@4.2.5.Final//org.jgroups.protocols.pbcast.GMS.up(GMS.java:868) at org.j...@4.2.5.Final//org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243) at org.j...@4.2.5.Final//org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049) at org.j...@4.2.5.Final//org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772) at org.j...@4.2.5.Final//org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753) at org.j...@4.2.5.Final//org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405) at org.j...@4.2.5.Final//org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592) at org.j...@4.2.5.Final//org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132) at org.j...@4.2.5.Final//org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186) at org.j...@4.2.5.Final//org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254) at org.j...@4.2.5.Final//org.jgroups.protocols.MERGE3.up(MERGE3.java:281) at org.j...@4.2.5.Final//org.jgroups.protocols.Discovery.up(Discovery.java:300) at org.j...@4.2.5.Final//org.jgroups.protocols.TP.passMessageUp(TP.java:1385) at 
org.j...@4.2.5.Final//org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at org.jboss.as.cl...@21.0.1.Final//org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49) at org.jboss.as.cl...@21.0.1.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70) at java.base/java.lang.Thread.run(Thread.java:834)

worker pod errors:
[org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-2) ISPN000329: Unable to read rebalancing status from coordinator keycloak-0: java.util.concurrent.CompletionException: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from keycloak-0, see cause for remote stack trace at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) at java.base/java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:670) at java.base/java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:658) at java.base/java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:2094) at java.base/java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:143) at org.inf...@10.1.8.Final//org.infinispan.topology.TopologyManagementHelper.executeOnCoordinator(TopologyManagementHelper.java:102) at org.inf...@10.1.8.Final//org.infinispan.topology.ClusterTopologyManagerImpl.fetchRebalancingStatusFromCoordinator(ClusterTopologyManagerImpl.java:160) at org.inf...@10.1.8.Final//org.infinispan.topology.ClusterTopologyManagerImpl.start(ClusterTopologyManagerImpl.java:149) at org.inf...@10.1.8.Final//org.infinispan.topology.CorePackageImpl$4.start(CorePackageImpl.java:87) at org.inf...@10.1.8.Final//org.infinispan.topology.CorePackageImpl$4.start(CorePackageImpl.java:71) at org.inf...@10.1.8.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:587) at org.inf...@10.1.8.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:578) at org.inf...@10.1.8.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:547) at org.inf...@10.1.8.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl.access$700(BasicComponentRegistryImpl.java:30) at org.inf...@10.1.8.Final//org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:770) at org.inf...@10.1.8.Final//org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:341) at org.inf...@10.1.8.Final//org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:237) at org.inf...@10.1.8.Final//org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:755) at org.inf...@10.1.8.Final//org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:726) at org.jboss.as.clus...@20.0.1.Final//org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.get(CacheContainerServiceConfigurator.java:120) at org.jboss.as.clus...@20.0.1.Final//org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.get(CacheContainerServiceConfigurator.java:74) at org.wildfly.clu...@20.0.1.Final//org.wildfly.clustering.service.FunctionalService.start(FunctionalService.java:63) at org.jb...@1.4.11.Final//org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1739) at org.jb...@1.4.11.Final//org.jboss.msc.service.ServiceControllerImpl$StartTask.execute(ServiceControllerImpl.java:1701) at org.jb...@1.4.11.Final//org.jboss.msc.service.ServiceControllerImpl$ControllerTask.run(ServiceControllerImpl.java:1559) at org.jbos...@2.3.3.Final//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35) at 
org.jbos...@2.3.3.Final//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982) at org.jbos...@2.3.3.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486) at org.jbos...@2.3.3.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1363) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from keycloak-0, see cause for remote stack trace at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.withException(ValidSingleResponseCollector.java:37) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:21) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.addResponse(SingleTargetRequest.java:72) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:42) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1435) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1338) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:132) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1487) at org.j...@4.2.4.Final//org.jgroups.JChannel.up(JChannel.java:784) at org.j...@4.2.4.Final//org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:135) at org.j...@4.2.4.Final//org.jgroups.stack.Protocol.up(Protocol.java:306) at org.j...@4.2.4.Final//org.jgroups.protocols.FORK.up(FORK.java:142) at org.j...@4.2.4.Final//org.jgroups.protocols.FRAG3.up(FRAG3.java:165) at org.j...@4.2.4.Final//org.jgroups.protocols.FlowControl.up(FlowControl.java:343) at org.j...@4.2.4.Final//org.jgroups.protocols.pbcast.GMS.up(GMS.java:868) at org.j...@4.2.4.Final//org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243) at org.j...@4.2.4.Final//org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049) at org.j...@4.2.4.Final//org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772) at org.j...@4.2.4.Final//org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753) at org.j...@4.2.4.Final//org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405) at org.j...@4.2.4.Final//org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592) at org.j...@4.2.4.Final//org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132) at org.j...@4.2.4.Final//org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186) at org.j...@4.2.4.Final//org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254) at org.j...@4.2.4.Final//org.jgroups.protocols.MERGE3.up(MERGE3.java:281) at org.j...@4.2.4.Final//org.jgroups.protocols.Discovery.up(Discovery.java:300) at org.j...@4.2.4.Final//org.jgroups.protocols.TP.passMessageUp(TP.java:1385) at org.j...@4.2.4.Final//org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at org.jboss.as.cl...@20.0.1.Final//org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49) at org.jboss.as.cl...@20.0.1.Final//org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70) ... 1 more Caused by: org.infinispan.commons.CacheException: Unknown command id 17! at org.inf...@10.1.8.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:181) at org.inf...@10.1.8.Final//org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.BytesObjectInput.readObject(BytesObjectInput.java:32) at org.inf...@10.1.8.Final//org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:49) at org.inf...@10.1.8.Final//org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:41) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192) at org.inf...@10.1.8.Final//org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221) at org.inf...@10.1.8.Final//org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1427)

It would be great if we could find out what is happening, as we are planning to release to prod soon.

Jan 8, 2021, 6:53:24 AM
to Keycloak User
I'm experiencing the same issue, and I figured out that it happens when I already have a Keycloak v11 container running on the same Docker host. I reproduced this on Docker 18.03 and 19.03 (both running on Ubuntu 18.04).

Steps to reproduce:

 > docker run -d --rm jboss/keycloak:11.0.3
 > # wait for keycloak to be ready
 > docker run -ti --rm jboss/keycloak:12.0.1 # this one will have the error

No issue when attaching the container to another network:
> docker network create keycloak_net
> docker run -ti --rm --network keycloak_net jboss/keycloak:12.0.1

Another workaround is to change the default discovery protocol, for example:
> docker run -ti --rm -e JGROUPS_DISCOVERY_PROTOCOL=PING jboss/keycloak:12.0.1

I don't have this issue when I run multiple Keycloak v12 containers on the same Docker host.
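
If you want to confirm that the two versions are actually joining one cluster, here is a quick sketch (the container names are hypothetical, since the commands above use --rm without --name):

> docker run -d --rm --name kc11 jboss/keycloak:11.0.3
> docker run -d --rm --name kc12 jboss/keycloak:12.0.1
> docker logs kc11 2>&1 | grep ISPN000094
> docker logs kc12 2>&1 | grep ISPN000094
> # if both "Received new cluster view" lines list both members, the v11 and v12 nodes have
> # discovered each other over the default bridge network, which is what triggers the
> # "Unknown command id" errors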

Mark Phippard

Jan 8, 2021, 1:34:10 PM
to Keycloak User
I also had this problem and was also using the Helm chart.

The workaround I found in my test environment was to delete the keycloak-0 pod that was still running version 11.0.2; Kubernetes then launched a new 12.0.1 pod and both pods started. Of course, this meant a roughly 30-second outage during which no Keycloak was available. In my case I could live with that, but I would also like to understand a better long-term solution.
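
Roughly, the sequence looked like the following (a sketch only; the namespace and StatefulSet/pod names depend on how the Helm chart was installed):

> kubectl get pods -n keycloak                               # keycloak-0 still running 11.0.2 after the upgrade
> kubectl delete pod keycloak-0 -n keycloak                  # ~30 seconds with no Keycloak while it restarts
> kubectl rollout status statefulset/keycloak -n keycloak    # both pods come back up on 12.0.1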

Mark

Dulanjali Adhikari

Jan 8, 2021, 6:04:26 PM
to Keycloak User
Yes, we were also able to upgrade to Keycloak 12.0.1 with downtime by scaling down the Keycloak pods first.
It would still be good to understand why we are getting this error (it seems to be cache-related), and, as Mark mentioned above, to know a better solution for the long term.
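
For completeness, roughly what we did (the names are placeholders; substitute your own release, chart, and namespace):

> # scale the existing 11.0.2 StatefulSet down so no old node stays in the JGroups view
> kubectl scale statefulset keycloak --replicas=0 -n keycloak
> # then run the normal Helm upgrade to 12.0.1; the new pods form a fresh, version-consistent
> # Infinispan cluster (sessions held in the replicated caches are lost during the downtime)
> helm upgrade <release> <keycloak-chart> -n keycloak -f values.yaml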

Gaurav Bhorkar

Jan 27, 2021, 10:30:11 AM
to Keycloak User
I'm facing the same error when there is already a Keycloak 11 instance running and a Keycloak 12 instance comes up. Can someone help identify what is going on under the hood?

benjam...@gmail.com

Feb 1, 2021, 12:34:45 PM
to Keycloak User
+1, I ran into the same issue as well, going from 11.0.3 to 12.0.2. It looks like Infinispan has been upgraded from 10 to 11, which is probably the culprit. Taking inspiration from the Cross-Datacenter Replication Mode section of the server doc (https://www.keycloak.org/docs/latest/server_installation/) (the doc still references the old 9.x Infinispan), I wonder if changing the cluster name (i.e. a new cache) or changing the (Hot Rod) protocolVersion property within standalone-ha.xml might help...

benjam...@gmail.com

Feb 2, 2021, 8:51:40 AM
to Keycloak User
I have taken a further look into the 10.1.x and 11.0.x branches of the Infinispan source code, trying to find the IDs referenced by the "Unknown command" lines. In the OP of this thread the ID was 17; in mine the IDs were 90 and 85. The key term I searched for was the declaration of "COMMAND_ID" (a rough sketch of the search follows the listing below). The following is what I found:

11.0.x branch:
./core/src/main/java/org/infinispan/commands/topology/RebalanceStatusRequestCommand.java:   public static final byte COMMAND_ID = 90;
./core/src/main/java/org/infinispan/commands/topology/CacheJoinCommand.java:   public static final byte COMMAND_ID = 85;
./core/src/main/java/org/infinispan/commands/irac/IracClearKeysCommand.java:   public static final byte COMMAND_ID = 17;

10.1.x branch:
./core/src/main/java/org/infinispan/topology/CacheTopologyControlCommand.java:   public static final byte COMMAND_ID = 17;
(none found for COMMAND_ID = 85 or 90). 
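
The search itself can be reproduced with something like this (a sketch, assuming a checkout of the infinispan/infinispan GitHub repository and the branch names above):

> git clone https://github.com/infinispan/infinispan.git && cd infinispan
> git checkout 11.0.x && grep -rn "COMMAND_ID = 17" core/src/main/java
> git checkout 10.1.x && grep -rn "COMMAND_ID = 17" core/src/main/java
> # repeat with "COMMAND_ID = 85" and "COMMAND_ID = 90" to cover the other IDs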

For command ID 17, comparing IracClearKeysCommand.java (11.0.x) and CacheTopologyControlCommand.java (10.1.x), one thing in common is that both classes are referenced in ./core/src/main/java/org/infinispan/commands/RemoteCommandsFactory.java. The difference is that IracClearKeysCommand.COMMAND_ID (17) is handled in "public CacheRpcCommand fromStream" (line 459 in the 11.0.x version, but not in the 10.1.x one), while CacheTopologyControlCommand.COMMAND_ID (also 17) is handled in "public ReplicableCommand fromStream" (line 156 in 10.1.x, but not in 11.0.x), which leads me to believe they serve different functions. The line number "RemoteCommandsFactory.java:264" in the OP's stack trace matches the "switch"/"default" in "public ReplicableCommand fromStream" in the Infinispan 11.0.x version of RemoteCommandsFactory.java (note that IracClearKeysCommand.COMMAND_ID is instead in the other method, "public CacheRpcCommand fromStream", at line 459). So in the OP's case the exception was thrown at line 264 of the 11.0.x RemoteCommandsFactory.java because command ID 17 is not handled there, while mine was most likely thrown in the 10.x version of the same file, in the same method, because command IDs 90 and 85 are not handled there. In other words, this is a case of two different versions of Infinispan talking to each other.

My earlier assumption about the Hot Rod protocol version is probably not relevant. The Hot Rod protocol versions for both branches can be found here:
./client/hotrod-client/src/main/java/org/infinispan/client/hotrod/ProtocolVersion.java
Both branches default to the same version, 3.0.

I have not had a chance to look further into the Infinispan documentation, but if there is a way to let Infinispan 10.x and 11.0.x servers co-exist in one cluster or share the cache during an upgrade, such as configuring Infinispan 11.0.x to use the older 10.x command set (i.e. backward compatibility), we could perhaps use that to transition Keycloak smoothly from 11 to 12, i.e. configure Keycloak 12.0.x's Infinispan to use the older command set, if that is at all possible.

benjam...@gmail.com

Feb 3, 2021, 4:21:43 AM
to Keycloak User
One last note: if you are using one of the external-store-based JGroups discovery protocols such as JDBC_PING or S3_PING, simply point the Keycloak 12.x launch at a new location, e.g. a new JGROUPSPING database table or a new S3 bucket. This essentially establishes a separate, non-interfering cluster for Keycloak 12.x and should minimize the downtime. The downside is that the existing Keycloak 11.x cache does not get replicated over. Sticky sessions might also help keep over-aggressive load balancers from bouncing users between the different versions during a gradual rollout. Hopefully this helps someone.
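
As a sketch of what I mean with the standard jboss/keycloak image (the bucket name is made up, and the exact discovery property names depend on the JGroups/S3_PING variant bundled in the image, so check the JGroups docs before relying on this):

> docker run -d --rm -e JGROUPS_DISCOVERY_PROTOCOL=S3_PING -e JGROUPS_DISCOVERY_PROPERTIES="region_name=eu-west-1,bucket_name=keycloak-12-jgroups-ping" jboss/keycloak:12.0.1
> # the 11.x nodes keep using their old bucket/table, so the two clusters never discover each other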