I've got a solution to your problem, which I've written in Python to be as explicit as possible (code is better than description). Note that the Lua code contained in this is *not* cluster safe.
Structures used:
STRING for holding session data:
<session key> -> <session data>
ZSET for holding known sessions for the account:
<account> -> {<session key>: <expiration time>, ...}
ZSET acting as a global session registry, so expired entries can be cleaned up without waiting out the full 10 minute timeout on every one of an account's sessions:
all: -> {'<account>:<session key>': <expiration time>, ...}
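To make the layout concrete, here is a rough sketch with plain Python dicts standing in for the three structures (the names "acct42" and "sess:abc" are invented for illustration; in Redis these would be three separate keys):

```python
import time

now = time.time()
expires = now + 600  # 10 minute timeout

# STRING: session key -> session data
sessions = {"sess:abc": "serialized session data"}

# ZSET per account: session key -> expiration time (the score)
account_sessions = {"acct42": {"sess:abc": expires}}

# ZSET global registry: "<account>:<session key>" -> expiration time
all_sessions = {"all:": {"acct42:sess:abc": expires}}

print(all_sessions["all:"])
```

The expiration time doing double duty as the ZSET score is what lets both the per-account listing and the global registry be range-queried by time later on.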
Whenever you have new information about an account session, you would call the update_session() function defined as follows:
import time

def update_session(conn, account, s_key, s_data, all_key='all:', expire=600):
    expires = time.time() + expire
    # Use a MULTI/EXEC transactional pipeline
    # (redis-py 3.x zadd() takes a {member: score} mapping)
    conn.pipeline(True) \
        .setex(s_key, expire, s_data) \
        .zadd(account, {s_key: expires}) \
        .expire(account, expire) \
        .zadd(all_key, {account + ":" + s_key: expires}) \
        .expire(all_key, expire) \
        .execute()
To clean up old sessions, periodically run the Lua script below, either directly or via the Python cleanup() function. The function could run as part of a daemon every few seconds or minutes, depending on how many sessions it needs to clean out each pass and how aggressive you want your cleanup to be. Note that the Python function will keep going until it has cleaned out *every* expired session it can find, so its total execution time is unbounded; you can cap it by passing max_cleaned. Because it makes multiple short Lua script calls rather than one long one, Redis can continue handling other requests while the cleanup is occurring. The Lua script itself takes a tunable number of sessions to clean out per call (also exposed in the Python function), defaulting to 20. Up to around 100 per call should be reasonably safe, but the more items cleaned per Lua call, the higher the command latency, which can affect other commands running in Redis.
lua_script = '''
local to_clean = redis.call(
    'ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1],
    'LIMIT', 0, tonumber(ARGV[2]) or 20)
for i, val in ipairs(to_clean) do
    local account = string.match(val, '([^:]+):')
    local session = string.match(val, ':(.+)')
    -- removes the session from the account listing
    redis.call('ZREM', account, session)
    -- removes the global registry entry that we just used
    redis.call('ZREM', KEYS[1], val)
end
return #to_clean
'''
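One subtlety worth noting: the patterns '([^:]+):' and ':(.+)' split the registry member at the *first* colon, so session keys may themselves contain colons even though account names may not. The equivalent split in Python (the value here is invented for illustration):

```python
# A registry member of the form "<account>:<session key>", where the
# session key itself happens to contain colons:
val = "acct42:sess:abc:123"

# Split at the first colon only, mirroring the Lua patterns above.
account, session = val.split(":", 1)
print(account, session)
```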
def cleanup(conn, all_key='all:', count=20, max_cleaned=2**64):
    cleaned = count
    total_cleaned = 0
    while cleaned == count and total_cleaned < max_cleaned:
        cleaned = conn.execute_command('EVAL',
            lua_script, 1, all_key, time.time(), count)
        total_cleaned += cleaned
    return total_cleaned
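To see the termination behavior of that loop without a live Redis server, here is a sketch using a stub connection (FakeConn is an invented test double; in real use conn would be a redis-py client, and the EVAL call would run the script above):

```python
import time

lua_script = "..."  # the cleanup script above; not executed by this stub

def cleanup(conn, all_key='all:', count=20, max_cleaned=2**64):
    cleaned = count
    total_cleaned = 0
    while cleaned == count and total_cleaned < max_cleaned:
        cleaned = conn.execute_command('EVAL',
            lua_script, 1, all_key, time.time(), count)
        total_cleaned += cleaned
    return total_cleaned

class FakeConn:
    """Stub that returns a scripted sequence of 'cleaned' counts."""
    def __init__(self, counts):
        self.counts = list(counts)
    def execute_command(self, *args):
        return self.counts.pop(0)

# Two full batches of 20, then a partial batch of 5 ends the loop.
total = cleanup(FakeConn([20, 20, 5]), count=20)
print(total)  # 45
```

The loop stops as soon as a call cleans fewer than `count` entries, which is the signal that no full batch of expired sessions remains.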
To get the list of valid sessions for a given account, you only need to perform:
conn.zrangebyscore(account, time.time(), "inf")
That will return the correct set of sessions regardless of whether the cleanup() function or the Lua script has been called recently.
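To illustrate why: the score on each ZSET member is its expiration time, so asking for scores between now and infinity filters out anything already expired, whether or not it has been physically removed yet. A rough simulation with a plain dict standing in for the account ZSET (member names invented):

```python
import time

now = time.time()
# Hypothetical per-account ZSET contents: session key -> expiration score.
members = {
    "sess:live": now + 300,   # still valid
    "sess:stale": now - 10,   # expired, but not yet cleaned up
}

# What ZRANGEBYSCORE account <now> +inf returns: only members whose
# score (expiration time) is at or after the current time.
valid = sorted(k for k, score in members.items() if score >= now)
print(valid)  # ['sess:live']
```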
As a bonus feature, if every client stopped making calls and Redis was left alone, Redis should clean itself up entirely after 10 minutes. :)
Regards,
- Josiah