we will look for a public place to go, and put a link and description on
the Contributions page in the coming days.
-Manuel
Hi Martin,
for several reasons, we cannot distribute the short-lived "loginContexts"
within the cluster. When the IdP performs redirects during a login
attempt, it stores some information in a so-called loginContext (which
is like a short-lived session, so information from the first request is
still available in the next request, until the login succeeds).
Therefore, the IdP creates a random id, stores this id as a cookie in
the httpResponse, and also puts a new loginContext object (under this
id) into the StorageService. The loginContext object is then returned
for further modification.
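To illustrate the flow described above, here is a minimal sketch. The class and method names are illustrative, not the IdP's actual API; a plain map stands in for the StorageService, and in the real IdP the returned id would also be set as the "_idp_authn_lc_key" cookie on the httpResponse.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative stand-in for the IdP's loginContext (not the real class).
class LoginContext {
    String state = "initial";  // per-login data carried across redirects
}

// Illustrative stand-in for the StorageService holding loginContexts.
class LoginContextStore {
    private final Map<String, LoginContext> storage = new HashMap<>();

    // Create a new loginContext under a fresh random id.
    // In the IdP, this id would also go into the "_idp_authn_lc_key" cookie.
    String create() {
        String id = UUID.randomUUID().toString();
        storage.put(id, new LoginContext());
        return id;
    }

    LoginContext get(String id) {
        return storage.get(id);
    }
}
```

As long as a caller still knows the id, it can fetch and modify the loginContext on each subsequent request of the same login attempt.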
However, after the loginContext has been created and sent to the
StorageService, you only have access to the loginContext object and no
longer know its id (you cannot read cookies from the httpResponse, and
you cannot retrieve the id anywhere else). Without the id, the
post-processing servlet filter cannot put the modified loginContext
back into the StorageService (in order to make it available to other
IdP nodes).
Instead of modifying the IdP to make the loginContext id available (or
iterating the local map until we find the key that maps to the current
loginContext), we decided to store the loginContext locally only, as it
is used only during the login process and we already had a load
balancer with sticky sessions (as recommended by the Shibboleth wiki; I
guess having the load balancer stick to the cookie "_idp_authn_lc_key"
for just a few minutes would work, too). This decision does not affect
the synchronization of other objects, though. If the IdP node you have
been using dies, any other IdP node will recognize you and let you
continue using your session.
Wondering if you considered using a distributed EhCache without Terracotta?
Best,
Bill
Hi Peter,
thanks for the repcached link. (There seem to be various clustered cache
solutions which speak the memcached protocol, see [1]; we only tested an
Infinispan cluster for this, but then stayed with memcached.)
So far, we do not replicate memcached's contents; instead, we spread
the data over multiple memcached instances (by means of a hash
function, which is the default of the spymemcached library we are
using), so each item is stored only once. If a memcached node fails,
its data may be lost, but new data for that node will be stored on
another node and retrieved from there, so the cluster keeps working.
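The key-distribution idea can be sketched as follows. This is a simplified illustration, not spymemcached's actual implementation (spymemcached applies its own hash function to the key, and also offers Ketama consistent hashing as an option); the point is only that each key deterministically maps to exactly one node.

```java
import java.util.List;

// Simplified sketch of hash-based key distribution across memcached nodes.
class NodeSelector {
    private final List<String> nodes;

    NodeSelector(List<String> nodes) {
        this.nodes = nodes;
    }

    // Every key maps to exactly one node, so each item is stored only
    // once in the cluster. If that node dies, new writes for the same
    // key simply go to whichever node the (unchanged) mapping selects
    // once the dead node is removed from the list.
    String nodeFor(String key) {
        int idx = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(idx);
    }
}
```

With a plain modulo mapping, removing a node remaps many keys; consistent hashing (Ketama) reduces that churn, which is why libraries offer it as an alternative.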
Additionally, with the IdP Memcached StorageService, each IdP keeps a
local cache for technical reasons (i.e. Java object references within
one IdP node must stay the same for each get() call). This also has the
intended side effect that, once a memcached entry is lost, the local
value will be used. As we use sticky sessions on the load balancer for
the user side, this guarantees that the user will always come back to
the same IdP node and keep their session even after a memcached node
has failed. Once the user authenticates to another SP, the selected IdP
node will write the local session to memcached again, so it will be
available to the other IdP nodes for back-channel requests.
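The local-cache behaviour can be sketched like this. The names are illustrative (this is not the actual StorageService code), and a plain map stands in for memcached: the local map guarantees that get() returns the identical object reference on every call within one node, and it also keeps the value available if the memcached entry is lost.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a local-first cache in front of memcached (illustrative names).
class LocalFirstCache<V> {
    private final Map<String, V> local = new HashMap<>();
    private final Map<String, V> memcached;  // stand-in for the remote store

    LocalFirstCache(Map<String, V> memcached) {
        this.memcached = memcached;
    }

    void put(String key, V value) {
        local.put(key, value);
        memcached.put(key, value);  // make it visible to other IdP nodes
    }

    V get(String key) {
        // Prefer the local copy: identical Java reference on every call,
        // and still present even if the memcached entry was lost.
        V v = local.get(key);
        if (v != null) {
            return v;
        }
        v = memcached.get(key);
        if (v != null) {
            local.put(key, v);  // cache so later calls return the same reference
        }
        return v;
    }
}
```

Combined with sticky sessions, a user keeps hitting the node whose local map still holds their session, and the next write pushes it back into memcached for the other nodes.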
A different solution that incorporates memcached replication on our IdP
Memcached StorageService's side is being discussed (such as building a
memcached cluster with one active and several passive nodes, as in
Terracotta, or a memcached cluster where each item is stored on
multiple, but not all, nodes). This, however, may not be necessary as
long as our current solution proves to work.
-Manuel
[1] Memcached replication options?
http://www.quora.com/Memcached-replication-options