Alan Deutscher
Feb 15, 2026, 1:24:41 AM
to rabbitmq-users
Hello all,
I've been tasked with adjusting a RabbitMQ consumer application to avoid duplicate executions in the event of an unstable connection with RabbitMQ. I wanted to post here to sanity-check my strategy.
In this environment:
* Messages are acknowledged on completion of a job.
* Each message carries a message identifier header that uniquely identifies it.
* Multiple instances of the consumer worker process pull from the queue.
* Jobs cannot currently be cancelled partway through processing.
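For context, each consumer looks roughly like this. This is a minimal sketch using pika; the queue name "jobs", the header key "message-id", and process_job() are stand-ins, not our real names:

```python
import pika

def process_job(body):
    ...  # placeholder: long-running work, cannot be cancelled partway through

def on_message(channel, method, properties, body):
    # Unique message identifier, set by the publisher as a header.
    message_id = (properties.headers or {}).get("message-id")
    process_job(body)
    # Ack only after the job completes, so a dropped connection leaves
    # the message unacked and RabbitMQ requeues it.
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_consume(queue="jobs", on_message_callback=on_message)
channel.start_consuming()
```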
If RabbitMQ detects that a connection has been lost, the unacked message returns to the queue, where another worker may pick it up. To account for this, I'm thinking of doing the following (rough code sketch after the list):
1. Worker A acquires an exclusive lock on handling Message 1 (the lock would likely be implemented in Redis).
2. Worker A's connection to RabbitMQ fails, returning Message 1 to the queue for consumption by another worker.
3. Worker B picks up Message 1.
4. Since Worker B is unable to acquire the exclusive lock on Message 1, it takes on the role of a "watcher", periodically retrying the lock instead of handling Message 1 directly.
5. Worker A finishes processing Message 1 and places the success/failure information in Redis.
6. Worker A releases the lock on Message 1.
7. Worker B acquires the lock on Message 1 and acks the message on Worker A's behalf, using the information stored in step 5.
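In code, steps 1-7 would look roughly like the sketch below, using redis-py. The key names, TTL values, and the ack callback are illustrative assumptions, not a finished design; process_job() is the same placeholder as in the consumer sketch above:

```python
import json
import time
import redis

r = redis.Redis()

LOCK_TTL = 600      # seconds; expiry guards against a worker dying mid-job
POLL_INTERVAL = 5   # seconds between the watcher's lock attempts

def process_job(body):
    ...  # placeholder for the real job

def handle(message_id, body, ack):
    lock_key = f"lock:{message_id}"
    result_key = f"result:{message_id}"

    # Step 1: try to take the exclusive lock. SET NX EX is atomic, so
    # only one worker can hold the lock for a given message id.
    if r.set(lock_key, "me", nx=True, ex=LOCK_TTL):
        result = process_job(body)
        r.set(result_key, json.dumps(result), ex=LOCK_TTL)  # step 5
        r.delete(lock_key)                                  # step 6
        ack()  # may fail if our own connection dropped; the watcher covers that
        return

    # Steps 3-4: someone else holds the lock; become a watcher and
    # periodically retry until the lock is free.
    while not r.set(lock_key, "me", nx=True, ex=LOCK_TTL):
        time.sleep(POLL_INTERVAL)

    # Step 7: if the original worker recorded a result, the job already
    # ran to completion; ack on its behalf without re-running it.
    if r.get(result_key) is not None:
        ack()
    else:
        # Lock expired with no result: the original worker presumably
        # died mid-job, so process the message ourselves.
        result = process_job(body)
        r.set(result_key, json.dumps(result), ex=LOCK_TTL)
        ack()
    r.delete(lock_key)
```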
There's a bit more to the flowchart to account for edge cases, like the nightmare event of Worker B's connection also failing, or whether the "watcher" work should be delegated to a separate thread rather than holding up Worker B's handling of further jobs (a sketch of that is below), but the above steps are the gist of what I'm thinking.
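For the threading question, one option I'm considering looks like this (building on the two sketches above; pika channels aren't thread-safe, so the ack has to be marshalled back onto the connection's thread via add_callback_threadsafe):

```python
from concurrent.futures import ThreadPoolExecutor

watchers = ThreadPoolExecutor(max_workers=4)

def on_message(channel, method, properties, body):
    message_id = (properties.headers or {}).get("message-id")

    # Acks must happen on the connection's own thread, so the watcher
    # hands the ack back via add_callback_threadsafe.
    def ack():
        connection.add_callback_threadsafe(
            lambda: channel.basic_ack(delivery_tag=method.delivery_tag))

    # Run handle() off-thread so this callback returns immediately and
    # Worker B can keep consuming further jobs while it watches.
    watchers.submit(handle, message_id, body, ack)
```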
Does this seem like a valid way to handle duplicate messages that have returned to the queue, or am I over-thinking it?
- Alan