Doing something between pop and push in an atomic way


Antonio Antonucci

unread,
Jul 11, 2021, 1:32:41 PM
to Redis DB
Hi everybody,
I am writing a piece of code where I basically need to pop the first available item from a list called "data_to_manage".

Now, if the popped value is NOT a member of a specific set called "data_to_delete", the code will do some work and push the data back into the "data_to_manage" list so it can be re-processed.

Otherwise, if the popped value is a member of the "data_to_delete" set, I need to delete that value from the set without pushing it back into the list.

With all that said, I would like to run all these commands atomically; however, I can't use MULTI / EXEC with an L/RPOP because I wouldn't know the result of the pop until the EXEC has completed.

Additionally, LMOVE / RPOPLPUSH wouldn't allow me to perform any actions in the middle.

Does anybody have a clue how this can be addressed?

Thanks a lot

Itamar Haber

unread,
Jul 11, 2021, 1:47:41 PM
to Redis DB
Hello Antonio,

Please have a look-see at https://redis.io/commands/eval and consider something like the following (assuming LPOP):

```lua
local list = KEYS[1]
local set = KEYS[2]
local el = redis.call('LRANGE', list, 0, 0)[1]  -- LRANGE returns a table; take its first element
if el and redis.call('SISMEMBER', set, el) == 1 then  -- SISMEMBER returns 0/1, and 0 is truthy in Lua
  -- do some code stuff if needed or whatever, and also choose whether to LPOP
else
  -- do some other code stuff, if needed, and choose to LPOP if we haven't already, i.e.:
  redis.call('LPOP', list)
end

return 'woot!!11one'
```
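
Such a script could then be invoked from Python with redis-py's `eval()`. A minimal sketch, assuming a redis-py client and the two key names from the question (the `run` helper and the LPOP/SREM choice filled into the script are illustrative, not part of the original suggestion):

```python
# Sketch: calling the Lua script with redis-py's eval().
# The key names ("data_to_manage", "data_to_delete") come from the question;
# here the script pops and forgets the element when it is in the delete set.
SCRIPT = """
local list = KEYS[1]
local set = KEYS[2]
local el = redis.call('LRANGE', list, 0, 0)[1]
if el and redis.call('SISMEMBER', set, el) == 1 then
  redis.call('LPOP', list)
  redis.call('SREM', set, el)
end
return el
"""

def run(r):
    # r is a redis-py client (assumed); 2 = number of KEYS that follow
    return r.eval(SCRIPT, 2, "data_to_manage", "data_to_delete")
```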

Cheers,
Itamar

Antonio Antonucci

unread,
Jul 11, 2021, 3:32:25 PM
to Redis DB
Hi Itamar,
Thanks for your prompt reply; however, I am not sure it addresses what I need.
The reason is that I have multiple jobs running concurrently, and by the time I reach the LPOP I can't be sure the value I am popping is still the one identified by the LRANGE.
Indeed, in this case I might have multiple jobs getting the same value from LRANGE, and when it comes to the LPOP only one of them will actually pop that value, while the others will not.

Below you will find my Python code.
Now, imagine that a scheduler runs this piece of code continuously on multiple nodes, which means I can have multiple jobs running concurrently.
The code below works without issue, but what happens if for some reason I have a glitch right after the BLPOP? My data structure is no longer consistent!

That's why I am trying to understand how to run this piece of code in a MULTI / EXEC.
However, if I have the BLPOP inside a MULTI / EXEC, I wouldn't know the result of the pop until the EXEC has completed.
Additionally, LMOVE / RPOPLPUSH wouldn't allow me to perform any actions in the middle.
Thanks for your time, gents.

```python
# (this runs inside the worker loop on each node)
user = r.blpop(["Q_active_users"])[1]

if not r.sismember("S_users_to_delete", user):
    # ...
    # doing something here
    # ...
    r.rpush("Q_active_users", user)
    continue

p = r.pipeline()
p.multi()
# ...
# doing something here
# ...
p.srem("S_users_to_delete", user)
p.execute()
```
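
For what it's worth, the pop / membership-check / SREM-or-RPUSH sequence itself can be made atomic by pushing it into a single Lua script, so no other job can interleave between the pop and the follow-up commands. A minimal sketch, using the key names from the code above and redis-py's `register_script` (the `POP_OR_DROP` name and return values are illustrative):

```python
# Sketch: pop + membership check + SREM-or-RPUSH done atomically
# server-side, so concurrent jobs cannot interleave between the steps.
POP_OR_DROP = """
local user = redis.call('LPOP', KEYS[1])
if not user then
  return nil
end
if redis.call('SISMEMBER', KEYS[2], user) == 1 then
  -- user is marked for deletion: forget it entirely
  redis.call('SREM', KEYS[2], user)
  return {user, 'dropped'}
else
  -- not marked: push it back at the tail to be re-worked
  redis.call('RPUSH', KEYS[1], user)
  return {user, 'requeued'}
end
"""

def pop_or_drop(r):
    # r is a redis-py client (assumed); register once, then call
    script = r.register_script(POP_OR_DROP)
    return script(keys=["Q_active_users", "S_users_to_delete"])
```

Note that this moves the "doing something here" work outside the atomic section; if that in-between work must itself be atomic with the pop, a distributed lock or a reliable-queue pattern (moving the item to a per-worker processing list) would be needed instead.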

Kunal Gangakhedkar

unread,
Jul 12, 2021, 1:48:45 AM
to redi...@googlegroups.com
What you need is a distributed lock manager, to serialise access to the list from multiple jobs (assuming they're not running on the same physical/virtual machine with shared memory).
If the jobs are running on the same physical/virtual machine, then you can simply use a local in-process/shared-memory mutex: in Python, this can be achieved with threading.Lock (for an in-process mutex) or multiprocessing.Lock (or other mechanisms for synchronisation between different processes).

You can simply use Redis itself as a DLM with SETNX - the SETNX page outlines a simple locking mechanism.
For a more robust algorithm, please check:

You can then acquire the lock before you access/modify all the shared lists.
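
A minimal sketch of such a Redis-based lock, using the modern `SET key value NX EX ttl` form rather than bare SETNX (the lock key name and TTL are illustrative; `r` is a redis-py client):

```python
# Sketch: a simple Redis lock using SET ... NX EX.
# A random token guards against releasing a lock that expired and
# was since acquired by another job.
import uuid

LOCK_KEY = "lock:Q_active_users"   # illustrative name

def acquire_lock(r, ttl=10):
    token = str(uuid.uuid4())
    # NX: only set if not already held; EX: auto-expire after ttl seconds
    if r.set(LOCK_KEY, token, nx=True, ex=ttl):
        return token
    return None

RELEASE = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
  return redis.call('DEL', KEYS[1])
end
return 0
"""

def release_lock(r, token):
    # delete only if we still own the lock (atomic check-and-delete)
    return r.eval(RELEASE, 1, LOCK_KEY, token)
```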

Hope this helps.

Thanks,
Kunal



Antonio Antonucci

unread,
Jul 21, 2021, 4:57:33 AM
to Redis DB
Hi Kunal, thanks for your reply. I have carefully read your answer; however, I am not sure it addresses what I need.
Let me explain further... your assumption is correct, the jobs are NOT running on the same physical / virtual machine. However, I can get some sort of serialisation by using a BLPOP: in that case, I am sure that each job will get a different user. But what happens if for some reason I have a glitch right after the BLPOP? I have lost a user that I would need to reuse.
Below you will find my python code.

```python
# (this runs inside the worker loop on each node)
user = r.blpop(["Q_active_users"])[1]

if not r.sismember("S_users_to_delete", user):
    # ...
    # doing something here
    # ...
    r.rpush("Q_active_users", user)
    continue

p = r.pipeline()
p.multi()
# ...
# doing something here
# ...
p.srem("S_users_to_delete", user)
p.execute()
```
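
The "glitch right after the BLPOP" failure mode is what the reliable-queue pattern addresses: instead of popping destructively, atomically move the item to a per-worker processing list, do the work, then remove it; a crashed worker's processing list can later be drained back into the main queue. A sketch using BLMOVE (Redis >= 6.2; the `work_one` helper and the processing-list naming are illustrative, not from this thread):

```python
# Sketch: reliable-queue pattern with BLMOVE.
# The popped item is never held only in client memory: until the final
# LREM it also lives in the processing list, so a crash between pop and
# requeue can be recovered by draining that list back into the queue.

def work_one(r, worker_id):
    processing = f"Q_processing:{worker_id}"   # illustrative name
    # atomically pop from the queue and park in the processing list
    user = r.blmove("Q_active_users", processing, 0)
    if not r.sismember("S_users_to_delete", user):
        # ... doing something here ...
        p = r.pipeline()
        p.multi()
        p.rpush("Q_active_users", user)   # requeue for re-work
        p.lrem(processing, 1, user)       # and un-park it, atomically
        p.execute()
        return
    p = r.pipeline()
    p.multi()
    p.srem("S_users_to_delete", user)
    p.lrem(processing, 1, user)
    p.execute()
```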

Thanks a lot
Antonio