Externalize HTTP Sessions (WildFly 24)


Linda Janse van Rensburg

May 3, 2022, 6:55:31 AM5/3/22
to WildFly
We are looking to externalize/cache HttpSessions precisely so that an application crash/restart is seamless to the client.
From the documentation that I’ve read, out-of-the-box HttpSession externalization to Infinispan requires that the application instances be in a cluster.
Our application instances are deployed as standalone. Will it still be possible to use the out-of-the-box solution?
 
We are looking at using Redis as the external HttpSession store. I cannot find an easy way to plug in Redis. It seems you have to implement (override) ServletExtension, SessionManagerFactory, and SessionManager?
 

Paul Ferraro

May 7, 2022, 2:53:30 PM5/7/22
to WildFly
On Tuesday, May 3, 2022 at 6:55:31 AM UTC-4 linda.janse...@absa.africa wrote:
We are looking to externalize/cache HttpSessions precisely so that an application crash/restart is seamless to the client.

There are several ways to allow an HttpSession to survive beyond the lifespan of a single server:
All approaches require that your application include <distributable/> in /WEB-INF/web.xml.

1. For a single server (deployed using a non-HA profile, e.g. standalone.xml), you can configure HttpSession to persist modifications made during a given request, so that they can be recovered following a crash/restart.  Within the default Infinispan subsystem configuration, the default cache configuration for the "web" cache-container uses a file store, but only for passivation.  You can instead make this persistent by using <file-store passivation="false" purge="false"/>.
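For reference, a sketch of what that might look like in the Infinispan subsystem fragment of standalone.xml; the cache and container names follow the defaults, but treat this as illustrative rather than exact:

```xml
<!-- Infinispan subsystem fragment (names match the default "web" cache-container) -->
<cache-container name="web" default-cache="passivation">
    <local-cache name="passivation">
        <!-- persistent, not passivation-only: entries survive restart and are not purged -->
        <file-store passivation="false" purge="false"/>
    </local-cache>
</cache-container>
```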

The obvious limitation of a single server is that your entire application is unavailable when the server is down. Consequently, most users will deploy the application to multiple servers and enable some mechanism for supporting failover:

2. Using a distributed session manager (configured to use infinispan-session-management via the distributable-web subsystem) that leverages an embedded replicated or distributed Infinispan cache (configured via the Infinispan subsystem).
3. Using a distributed session manager (configured to use infinispan-session-management via the distributable-web subsystem) that leverages an embedded invalidation Infinispan cache that uses a shared persistent store (configured via the Infinispan subsystem).  The shared persistent store might be a shared filesystem, a database, or remote Infinispan cluster.
4. Using a distributed session manager (configured to use hotrod-session-management via the distributable-web subsystem) that persists sessions to a remote Infinispan cluster (configured via the Infinispan subsystem).
5. If you use the Spring ecosystem, you can configure your application to use an HA SessionRepository.  You can even use WildFly's distributed session management features with Spring Session via https://github.com/wildfly-clustering/wildfly-clustering-spring-session
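To make option #4 more concrete, a rough configuration sketch follows; the container, cluster, and socket-binding names here are made up for illustration:

```xml
<!-- infinispan subsystem: a remote-cache-container pointing at the Infinispan cluster -->
<remote-cache-container name="session-store" default-remote-cluster="remote-site">
    <remote-clusters>
        <remote-cluster name="remote-site" socket-bindings="infinispan-server-1 infinispan-server-2"/>
    </remote-clusters>
</remote-cache-container>

<!-- distributable-web subsystem: session manager backed by the remote container -->
<hotrod-session-management name="hotrod" remote-cache-container="session-store" granularity="SESSION">
    <local-affinity/>
</hotrod-session-management>
```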


From the documentation that I’ve read, out-of-the-box HttpSession externalization to Infinispan requires that the application instances be in a cluster.

Of the options listed above, only #2 and #3 require that your WildFly servers be clustered.
 
Our application instances are deployed as standalone. Will it still be possible to use the out-of-the-box solution?

What exactly do you mean by "deployed as standalone"?  WildFly uses the term "standalone" to refer to how the server is managed (as opposed to being managed by a domain controller).  This is completely orthogonal to clustering, meaning that multiple "standalone" servers can be clustered or not.
 
 
We are looking at using Redis as the external HttpSession store. I cannot find an easy way to plugin Redis. Seems you have to implement (override) ServletExtension, SessionManagerFactory and SessionManager?

WildFly does not have any OOTB support for Redis as a session store; however, a Redis implementation of Spring Session exists: https://spring.io/projects/spring-session-data-redis
Just be aware that Spring Session has several limitations w.r.t. the Jakarta Servlet specification, as described here: https://github.com/wildfly-clustering/wildfly-clustering-spring-session#notes

If you are feeling ambitious, you can implement your own Undertow SessionManagerFactory and override the DeploymentInfo of your deployed application via a ServletExtension.  This would avoid all of the caveats inherent with Spring Session + Redis.

Let me know if you have any questions about any of the above approaches.

Paul

Linda Janse van Rensburg

May 13, 2022, 7:54:24 AM5/13/22
to WildFly
Thank you for the information, it's very helpful.

Our applications are WildFly Bootable Jars (WF 24.0.1.Final).
We have a load balancer (AVI/HAProxy) in front of the applications for load balancing, high availability and where necessary, sticky sessions.
Typically an application will have 1 or more instances running on 2+ nodes.
For Example:
    Application_A (instance 1 port 9080) runs on Server_1 (node1) and on Server_2 (node2),
    Application_A (instance 2 port 9081) runs on Server_1 (node1) and on Server_2 (node2),
    Application_B (instance 1 port 8087) runs on Server_1 (node1) and on Server_2 (node2),
    Application_C (instance 1 port 8090) runs on Server_3 (node3) and on Server_4 (node4),
    Application_C (instance 2 port 8091) runs on Server_3 (node3) and on Server_4 (node4),
    Application_C (instance 3 port 8092) runs on Server_3 (node3) and on Server_4 (node4),
    Application_C (instance 4 port 8093) runs on Server_3 (node3) and on Server_4 (node4)

So options 1 and 5 are not applicable.

The applications where we want to externalize the HttpSession are high-volume/high-concurrency applications, and what we store in the HttpSession can be updated often during a request.
So it looks as if option 2 is not viable either.

At the moment we are doing research on your option 4.

(Side note - I don't feel that ambitious)

We will definitely have more questions on this journey and will ask questions as they arise. Thanks!

Paul Ferraro

May 13, 2022, 1:47:18 PM5/13/22
to WildFly
On Friday, May 13, 2022 at 7:54:24 AM UTC-4 linda.janse...@absa.africa wrote:
Thank you for the information, it's very helpful.

Our applications are WildFly Bootable Jars (WF 24.0.1.Final).
We have a load balancer (AVI/HAProxy) in front of the applications for load balancing, high availability and where necessary, sticky sessions.
Typically an application will have 1 or more instances running on 2+ nodes.
For Example:
    Application_A (instance 1 port 9080) runs on Server_1 (node1) and on Server_2 (node2),
    Application_A (instance 2 port 9081) runs on Server_1 (node1) and on Server_2 (node2),
    Application_B (instance 1 port 8087) runs on Server_1 (node1) and on Server_2 (node2),
    Application_C (instance 1 port 8090) runs on Server_3 (node3) and on Server_4 (node4),
    Application_C (instance 2 port 8091) runs on Server_3 (node3) and on Server_4 (node4),
    Application_C (instance 3 port 8092) runs on Server_3 (node3) and on Server_4 (node4),
    Application_C (instance 4 port 8093) runs on Server_3 (node3) and on Server_4 (node4)

So options 1 and 5 are not applicable.

The applications where we want to externalize the HttpSession are high-volume/high-concurrency applications, and what we store in the HttpSession can be updated often during a request.
So it looks as if option 2 is not viable either.

It is unclear to me why this requirement would preclude option #2.  Of the HA options I listed, this is actually the most performant.
 
At the moment we are doing research on your option 4.

(Side note - I don't feel that ambitious)

We will definitely have more questions on this journey and will ask questions as they arise. Thanks!

Sounds good.

Linda Janse van Rensburg

May 19, 2022, 11:12:11 PM5/19/22
to WildFly
My thinking w.r.t. the viability of option 2 is as follows: the main application where we want to externalize the HttpSession has a customer base of over 2 million and growing. So we have anything from 30,000 to potentially 2 million concurrent users, and thus that many HttpSessions. The transaction rate is around 200 a second, thus around 200 reads/updates of the HttpSession. The application is currently deployed to 20 nodes across 2 data centers (and scaling out as the customer base grows). Thus with option 2, the network chattiness (across data centers) and the number of times the HttpSession will be touched would be too much for the application and would bring it down. Doesn't the Infinispan documentation also discourage embedded replication if there are more than 10 nodes?

Paul Ferraro

May 20, 2022, 9:42:31 AM5/20/22
to WildFly
Re: #2, you wouldn't want your cluster to span data centers (this would be untenable w.r.t. bandwidth, latency, likelihood of network partitions, etc.).  Instead, you would want to use a cluster per data center.
The question becomes whether or not you need any redundancy across data centers, which would be achieved via cross-site replication.  If so, you'll want to externalize HttpSession state (via hotrod-session-management) to an Infinispan cluster in each data center, which can be configured with async cross-site replication.
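On the Infinispan server side, async cross-site replication for the session cache might look roughly like the following fragment; the cache name and site name "DC2" are placeholders:

```xml
<!-- Infinispan server cache configuration fragment -->
<distributed-cache name="sessions">
    <!-- asynchronously back up session entries to the cluster in the other data center -->
    <backups>
        <backup site="DC2" strategy="ASYNC"/>
    </backups>
</distributed-cache>
```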

Re: embedded replication with 10+ nodes, you would certainly not want to use replication mode (since each node would need to manage 10x the number of active sessions), but rather distribution mode, where a given session has a fixed number of backup copies stored on other cluster members (1, by default).


Linda Janse Van Rensburg

Jul 18, 2022, 10:34:20 PM7/18/22
to WildFly
Option 4 - Configurations

The following attributes exist in the Infinispan subsystem / remote-cache-container:

component=connection-pool/exhausted-action
component=connection-pool/max-active
component=connection-pool/max-wait
component=connection-pool/min-evictable-idle-time
component=connection-pool/min-idle
remote-cluster
connection-timeout
marshaller
max-retries
protocol-version
socket-timeout
statistics-enabled
tcp-keep-alive
tcp-no-delay
transaction-timeout

Do they map to the following HotRod properties / are they the same? [https://docs.jboss.org/infinispan/13.0/apidocs/org/infinispan/client/hotrod/configuration/package-summary.html]

infinispan.client.hotrod.connection_pool.exhausted_action
infinispan.client.hotrod.connection_pool.max_active
infinispan.client.hotrod.connection_pool.max_wait
infinispan.client.hotrod.connection_pool.min_evictable_idle_time
infinispan.client.hotrod.connection_pool.min_idle
infinispan.client.hotrod.cluster
infinispan.client.hotrod.connect_timeout
infinispan.client.hotrod.marshaller
infinispan.client.hotrod.max_retries
infinispan.client.hotrod.protocol_version
infinispan.client.hotrod.socket_timeout
infinispan.client.hotrod.statistics
infinispan.client.hotrod.tcp_keep_alive
infinispan.client.hotrod.tcp_no_delay
infinispan.client.hotrod.transaction.timeout


Paul Ferraro

Jul 19, 2022, 9:34:04 AM7/19/22
to Linda Janse Van Rensburg, WildFly
Yes - these are directly related, with a couple of caveats:
* In EAP, the marshaller attribute accepts one of 2 enumerated values, JBOSS (JBoss Marshalling) and PROTOSTREAM, whereas "infinispan.client.hotrod.marshaller" expects a class name.
* Infinispan defines its default remote-cluster using the "infinispan.client.hotrod.uri" or "infinispan.client.hotrod.server_list" properties, and defines alternate remote-clusters via the "infinispan.client.hotrod.cluster.*" properties.  In contrast, EAP defines each cluster via a /subsystem=infinispan/remote-cache-container=*/remote-cluster=* resource, and uses the default-remote-cluster attribute to reference the default remote-cluster.

Paul

Paul Ferraro

Jul 19, 2022, 9:37:39 AM7/19/22
to Linda Janse Van Rensburg, WildFly
I should also add that EAP also supports adhoc properties, interpreted as HotRod client properties, via the "properties" attribute, e.g.

/subsystem=infinispan/remote-cache-container=foo:map-put(name=properties, key=infinispan.client.hotrod.tcp_no_delay, value=true)

Linda Janse Van Rensburg

Aug 1, 2022, 1:39:37 AM8/1/22
to WildFly
Within a WildFly bootable app (WildFly 24.0.1.Final), which property takes precedence? The hotrod properties or the infinispan/remote-cache-container?
For example:
subsystem=infinispan/remote-cache-container/statistics-enabled's default value is false
infinispan.client.hotrod.statistics default value is ENABLED

subsystem=infinispan/remote-cache-container/component=connection-pool's max-wait and max-active don't have default values
infinispan.client.hotrod.connection_pool.max_active and max_wait's default value is -1

Paul Ferraro

Aug 1, 2022, 8:56:20 AM8/1/22
to WildFly
See below.

On Monday, August 1, 2022 at 1:39:37 AM UTC-4 ljansevan...@gmail.com wrote:
Within a WildFly bootable app (WildFly 24.0.1.Final), which property takes precedence? The hotrod properties or the infinispan/remote-cache-container?
For example:
subsystem=infinispan/remote-cache-container/statistics-enabled's default value is false
infinispan.client.hotrod.statistics default value is ENABLED

Formal attributes (whether defined or not) will generally take precedence over adhoc properties.
In your above example, statistics will be disabled.
 
subsystem=infinispan/remote-cache-container/component=connection-pool's max-wait and max-active don't have default values
infinispan.client.hotrod.connection_pool.max_active and max_wait's default value is -1

Similarly, the formal attributes of the management model (whether defined or not) take precedence over adhoc properties.

If the max-wait or max-active attributes of the component=connection-pool resource are undefined, they will be converted to the appropriate default value expected by Infinispan to represent "infinity" (i.e. -1).
These 2 attributes are an example of WildFly "sanitizing" the configuration that we expose to the user.  In this case, if a connection pool has no maximum size (i.e. the maximum size is infinite), WildFly represents this as "undefined", whereas Infinispan internally represents this by the (in my opinion) less intuitive value of "-1".

Linda Janse Van Rensburg

Aug 12, 2022, 12:56:12 AM8/12/22
to WildFly
WildFly Full 24 Model Reference: I see subsystem=infinispan/remote-cache-container/near-cache is deprecated.

Should the HotRod per-cache properties rather be used?
    infinispan.client.hotrod.cache.cachename.near_cache.mode
    infinispan.client.hotrod.cache.cachename.near_cache.max_entries

Linda Janse Van Rensburg

Aug 12, 2022, 7:37:41 AM8/12/22
to WildFly
While testing a simple implementation of option 4, I have a few questions.

WildFly 24.0.1.Final
Bootable jar 5.0.2.Final

2 application instances / AVI loadbalancer
1 infinispan server

Configurations:
granularity="SESSION"
no-affinity
default-session-timeout="6"
web.xml: <distributable/>
Rest of the configurations – default (I haven't enabled near cache, to my knowledge it is then disabled)
I've enabled HotRod logging in the application

First request to application instance 1:

If the user object is null on HttpSession, creates it and adds it to HttpSession

  • About to add (K,V): (SessionCreationMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionCreationMetaData{created=2022-08-12T08:56:17.019Z, max-inactive-interval=PT0S})
  • About to add (K,V): (SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionAccessMetaData{since-creation=PT0Slast-access=PT0S})
  • About to add (K,V): (SessionAttributesKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), {})
  • About to add (K,V): (SessionCreationMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionCreationMetaData{created=2022-08-12T08:56:17.019Z, max-inactive-interval=PT6M})
  • About to add (K,V): (SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionAccessMetaData{since-creation=PT0Slast-access=PT1S})
  • About to add (K,V): (SessionAttributesKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), {user=za.co.session.hotrod.User@69875b13})
  • Infinispan Admin Console:       Stores = 6
The logs that I see - are those from before it communicates with the Infinispan server to add the session to the server's cache?
So looking at the above logs, it corresponds to the forum discussion https://groups.google.com/d/msgid/wildfly/91ec9beb-c46c-447e-9ecc-f26273d2b275n%40googlegroups.com in that 3 cache entries are created on the Infinispan server for the HttpSession.

The Stores=6 statistic in the console, I assume, means that there were 6 calls to the server to add or update cache entries. Is there anything that indicates the number of HttpSessions cached on the Infinispan server?

In the application instance, does WildFly store the HttpSession in memory for the duration of the request?

Second request to instance 2

Gets the user object and updates a timestamp on it

  • About to getAll entries ([SessionCreationMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI)])
  • About to add (K,V): (SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionAccessMetaData{since-creation=PT42.952Slast-access=PT1S})
  • About to add (K,V): (SessionAttributesKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), {user=za.co.session.hotrod.User@4dce4a64})
  • Infinispan Admin Console: Hits=3 Stores=8 Retrievals=3
In this instance it will fetch the session information from the Infinispan server, then after the change to the User object it will update the cache entries on the Infinispan server for the session.

Third request to instance 1

Gets the user object and updates a timestamp on it

  • About to getAll entries ([SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SessionCreationMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI)])
  • About to add (K,V): (SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionAccessMetaData{since-creation=PT1M46.85Slast-access=PT1S})
  • About to add (K,V): (SessionAttributesKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), {user=za.co.session.hotrod.User@776136ee})
  • Infinispan Admin Console: Hits=6 Stores=10 Retrievals=3
In this instance it will fetch the session information from the Infinispan server, then after the change to the User object it will update the cache entries on the Infinispan server for the session.

No more requests to any of the application instances until the session times out.
I assume the container communicates the session timeout, and HotRod will now communicate to the Infinispan server that the session expired? This happens in both application instances as far as I can see (?), but it will not try to delete the session stored on the Infinispan server twice, only once?
The Infinispan Admin Console shows the following: Hits=7 Stores=10 Retrievals=7 Remove Hits=3 Remove Misses=1. Is the remove miss because the session was already removed by one of the application instances and the 2nd instance couldn't remove it? And the 3 remove hits, are those equal to the 3 entries of the 1 session?

In the following scenarios, what happens to the session entries on the Infinispan server?
Scenario 1: First request went to application instance 1. No more requests after the first one. For some (funny) reason someone shuts down application instance 1 before the session times out. 
Scenario 2: First request went to application instance 1. No more requests after the first one. Application instance 1 crashes.


Paul Ferraro

Aug 12, 2022, 8:42:33 AM8/12/22
to WildFly
In general, near cache configuration was refactored to be per-cache instead of per-container as of Infinispan 11.0.x.
e.g.
RemoteCacheContainer container = ...;
// Configure an invalidated near cache, bounded to 100 entries, for the "foo" cache
container.getConfiguration().addRemoteCache("foo",
        builder -> builder.nearCacheMode(NearCacheMode.INVALIDATED).nearCacheMaxEntries(100));
RemoteCache<String, Object> cache = container.getCache("foo");

That said, when using hotrod-session-management, the near cache is auto-configured based on the <max-active-sessions/> value from jboss-web.xml.

Paul

Paul Ferraro

Aug 12, 2022, 9:37:44 AM8/12/22
to WildFly
Comments inline...

On Friday, August 12, 2022 at 7:37:41 AM UTC-4 ljansevan...@gmail.com wrote:
While testing a simple implementation of option 4, I have a few questions.

WildFly 24.0.1.Final
Bootable jar 5.0.2.Final

2 application instances / AVI loadbalancer
1 infinispan server

Configurations:
granularity="SESSION"
no-affinity

You should use affinity=local to prevent stale session reads by concurrent requests for the same session.
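In distributable-web subsystem XML, that would look something like the following; the session-management name and remote-cache-container reference are placeholders:

```xml
<hotrod-session-management name="hotrod" remote-cache-container="session-store" granularity="SESSION">
    <!-- route requests back to the server that last handled the session -->
    <local-affinity/>
</hotrod-session-management>
```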
 
default-session-timeout="6"
web.xml: <distributable/>
Rest of the configurations – default (I haven't enabled near cache, to my knowledge it is then disabled)

As I mentioned in my previous comment, near caching is auto-configured based on the <max-active-sessions/> in jboss-web.xml.
By default, there is no maximum, thus near caching is disabled.
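For example, a jboss-web.xml that caps active sessions (and thereby enables near caching) might look like this; the limit of 1000 is purely illustrative:

```xml
<jboss-web>
    <!-- bounds the local (near) cache; sessions beyond this are served from the remote store -->
    <max-active-sessions>1000</max-active-sessions>
</jboss-web>
```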

I've enabled HotRod logging in the application

First request to application instance 1:

If the user object is null on HttpSession, creates it and adds it to HttpSession

  • About to add (K,V): (SessionCreationMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionCreationMetaData{created=2022-08-12T08:56:17.019Z, max-inactive-interval=PT0S})
  • About to add (K,V): (SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionAccessMetaData{since-creation=PT0Slast-access=PT0S})
  • About to add (K,V): (SessionAttributesKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), {})
  • About to add (K,V): (SessionCreationMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionCreationMetaData{created=2022-08-12T08:56:17.019Z, max-inactive-interval=PT6M})
  • About to add (K,V): (SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionAccessMetaData{since-creation=PT0Slast-access=PT1S})
  • About to add (K,V): (SessionAttributesKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), {user=za.co.session.hotrod.User@69875b13})
  • Infinispan Admin Console:       Stores = 6
The logs that I see - are those from before it communicates with the Infinispan server to add the session to the server's cache?

The first 3 log messages correspond to the RemoteCache.put(...) operations in response to session creation, i.e. HttpServletRequest.getSession(true).
The last 3 log messages correspond to the RemoteCache.put(...) operations to update the remote cache at end of the request.
 
So looking at the above logs, it corresponds to the forum discussion https://groups.google.com/d/msgid/wildfly/91ec9beb-c46c-447e-9ecc-f26273d2b275n%40googlegroups.com in that 3 cache entries are created on the Infinispan server for the HttpSession.

The Stores=6 statistic in the console, I assume, means that there were 6 calls to the server to add or update cache entries. Is there anything that indicates the number of HttpSessions cached on the Infinispan server?
 
Session statistics are available via the WildFly CLI.
e.g.
/deployment=foo.war/subsystem=undertow:read-attribute(name="active-sessions")

Alternatively, you can divide the number of entries in the remote cache by 3 (though this only works for SESSION granularity).

In the application instance, does WildFly store the HttpSession in memory for the duration of the request?

Yes, though not the HttpSession facade itself, just the underlying data structure.  Concurrent requests for the same session will reference the same data structure.

Second request to instance 2

Gets the user object and updates a timestamp on it

  • About to getAll entries ([SessionCreationMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI)])
  • About to add (K,V): (SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionAccessMetaData{since-creation=PT42.952Slast-access=PT1S})
  • About to add (K,V): (SessionAttributesKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), {user=za.co.session.hotrod.User@4dce4a64})
  • Infinispan Admin Console: Hits=3 Stores=8 Retrievals=3
In this instance it will fetch the session information from the Infinispan server, then after the change to the User object it will update the cache entries on the Infinispan server for the session.

Correct.
At this point, I would also suggest making za.co.session.hotrod.User an immutable object.  This way, you can avoid a potentially unnecessary put operation if your User is not modified.

Third request to instance 1

Gets the user object and updates a timestamp on it

  • About to getAll entries ([SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SessionCreationMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI)])
  • About to add (K,V): (SessionAccessMetaDataKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), SimpleSessionAccessMetaData{since-creation=PT1M46.85Slast-access=PT1S})
  • About to add (K,V): (SessionAttributesKey(Ac-3z6Ibg-oAQEPlXcGXKGlpkDariI03JniyzFSI), {user=za.co.session.hotrod.User@776136ee})
  • Infinispan Admin Console: Hits=6 Stores=10 Retrievals=3
In this instance it will fetch the session information from the Infinispan server, then after the change to the User object it will update the cache entries on the Infinispan server for the session.

No more requests to any of the application instances until the session times out.
I assume the container communicates the session timeout, and HotRod will now communicate to the Infinispan server that the session expired? This happens in both application instances as far as I can see (?), but it will not try to delete the session stored on the Infinispan server twice, only once?

Expiration handling for hotrod-session-management is completely different than the embedded cache-based infinispan-session-management.

When using infinispan-session-management, session expiration is scheduled by the WF instances themselves, based on consistent hashing of the session identifier.  In contrast, when using hotrod-session-management, session expiration is handled by the Infinispan server.
Our initial RemoteCache.put(...) of the session creation metadata entry specified a max-idle value based on the max-inactive-interval of the session (i.e. 6 minutes).
The session manager on each WF instance registers a @ClientCacheEntryExpired listener.  Each WF instance will receive the same event, and all will attempt to remove the session access meta data entry.  Only one WF instance will do this successfully (i.e. the RemoteCache.remove(...) will return a non-null value).  The WF instance that did this successfully will trigger the requisite HttpSessionListener.sessionDestroyed(...) event and remove any associated session attribute entries.
 
The Infinispan Admin Console shows the following: Hits=7 Stores=10 Retrievals=7 Remove Hits=3 Remove Misses=1. Is the remove miss because the session was already removed by one of the application instances and the 2nd instance couldn't remove it? And the 3 remove hits, are those equal to the 3 entries of the 1 session?

The remove miss corresponds to the WF instance that did not ultimately trigger the HttpSessionListener.sessionDestroyed(...) event, since its RemoteCache.remove(...) operation returned null.
The 3 remove hits correspond to the session creation metadata entry removal due to the max-idle expiration, the successful removal of the session access metadata entry, and the successful removal of the session attributes entry.
 
In the following scenarios, what happens to the session entries on the Infinispan server?
Scenario 1: First request went to application instance 1. No more requests after the first one. For some (funny) reason someone shuts down application instance 1 before the session times out. 

The session will expire within infinispan-server and the expiration handled by application instance 2.
 
Scenario 2: First request went to application instance 1. No more requests after the first one. Application instance 1 crashes.

The handling is the same as scenario 1.

These were all great questions, by the way.  Let me know if you have any others.

Paul

Linda Janse Van Rensburg

Aug 23, 2022, 7:14:50 AM8/23/22
to WildFly
Can you please explain the communication between the Infinispan Server and application for the session expiry in more detail? 
Is there something special in the HotRod client that makes this possible?
I cannot see that the Infinispan server opens a connection to the application instances(?), as the only config change I've made on the Infinispan server is to set infinispan.bind.address to 0.0.0.0.

Thank you!

Paul Ferraro

Aug 29, 2022, 9:08:08 AM8/29/22
to WildFly