Policy not applied to pre-existing queue


Jérémie

Feb 10, 2022, 12:46:42 PM
to rabbitmq-users
Hello,

I created a policy to add the option "max-in-memory-length" to my quorum queues. New queues matching the pattern get the option and seem to behave accordingly; however, pre-existing queues don't.

The web interface of the pre-existing queues updates a few seconds after the policy creation with the correct "Effective policy definition" value, but the max-in-memory-length is never applied to the queue.

I tried emptying the queue, unbinding/rebinding the exchange, and doing a rolling restart of the cluster, but the "In memory ready" message count still ends up exceeding the limit set by the policy.
If I delete the queue and re-create it, the policy works as expected.

Once the policy is created and working on a queue, if I update the value of "max-in-memory-length" I do not need to delete and re-create the queue for it to pick up the new value.

My RabbitMQ version is 3.8.19, deployed on Kubernetes with the RabbitMQ Operator on 3 nodes. The quorum queues have no options other than `durable:true`.

Is this expected behavior? How can I apply this policy to all my existing queues without re-creating them all?

Thanks

Michal Kuratczyk

Feb 10, 2022, 2:16:14 PM
to rabbitm...@googlegroups.com
Hi,

I'd need to check why it's not applied to existing queues, but the good news is that the upcoming QQv2 implementation always behaves as if max-in-memory-length=0.
QQv2 is just a code name for some internal implementation changes and new features such as DLX, and we hope to ship it very soon. Once you upgrade to a version with QQv2,
all your existing quorum queues will become QQv2 (no code or configuration changes required). Would this be a sufficient solution for you?

Best,

--
You received this message because you are subscribed to the Google Groups "rabbitmq-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rabbitmq-user...@googlegroups.com.
To view this discussion on the web, visit https://groups.google.com/d/msgid/rabbitmq-users/559364b2-6cbd-4817-94ba-5c56aa633ea6n%40googlegroups.com.


--
Michał
RabbitMQ team

Jérémie

Feb 11, 2022, 4:28:16 AM
to rabbitmq-users
Hi Michał, thank you for the answer.

QQv2 might indeed fit my use case (multiple jobs where a publisher fills up a queue that is slowly consumed by multiple consumers). My goal here is to limit memory usage during the short periods when multiple queues receive a lot of messages at the same time, and to avoid hitting the memory watermark.
The max-in-memory-length=0 setting seems to help reduce memory usage, but I'm still trying to understand why my queue process memory stays high even after publishing stops and a garbage collection is run on the nodes.

In any case, an upgrade of RabbitMQ might be the long-term solution, but I'll need to find a way to work with the policies in the meantime and, if possible, avoid re-creating the queues.
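To investigate where the memory is going, the per-queue memory footprint can be listed alongside the in-RAM message count, and a node-wide garbage collection can be forced. `messages_ram`, `memory`, and `force_gc` are documented `rabbitmqctl` queue info items and subcommands; the exact output will of course depend on the deployment, so this is a diagnostic sketch rather than a fix:

```shell
# List each queue's total messages, messages kept in RAM, and the
# memory footprint (in bytes) of the queue process itself.
$ rabbitmqctl list_queues name messages messages_ram memory

# Force a garbage collection on all Erlang processes of the node,
# which can release memory the queue processes no longer need.
$ rabbitmqctl force_gc
```

Comparing the `memory` column before and after `force_gc` helps distinguish memory actually held by messages from memory merely not yet reclaimed by the Erlang VM.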

Michal Kuratczyk

Feb 11, 2022, 5:50:20 AM
to rabbitm...@googlegroups.com
Hi,

It seems to work for me (using the latest version).

# declare a queue "qq" with max-length and no max-in-memory-length set and send 1000 messages
$ perf-test -qq -u qq -y 0 -C 1000 --queue-args "x-max-length=100"

# check that the queue contains 100 messages, all of them in RAM
$ rabbitmqctl list_queues name messages_ram
name    messages    messages_ram
qq      100             100

# apply a policy
$ rabbitmqctl set_policy qq-ram "qq" '{"max-in-memory-length": 0}'

# at this point still all messages are in RAM - the policy is applied to the existing queue but not the messages already in it
$ rabbitmqctl list_queues name messages_ram
name    messages    messages_ram
qq      100             100

# publish another 100 messages (previous messages are pushed out of the queue)
$ perf-test -qq -u qq -y 0 -C 1000 --queue-args "x-max-length=100"

# no messages are in RAM anymore
$ rabbitmqctl list_queues name messages_ram
name    messages    messages_ram
qq      100             0

Let us know if the above gives you a different result (you can find perf-test here: https://github.com/rabbitmq/rabbitmq-perf-test).

Best,



--
Michał
RabbitMQ team

Jérémie

Feb 14, 2022, 4:34:54 AM
to rabbitmq-users
Hi,
I ran your test on a fresh RabbitMQ Docker instance (using 3.8.19) and I get the same results as you.
 
My problem was that I had applied the policy to queues only while publishing through an exchange. It now works on pre-existing queues if I apply the policy to both exchanges and queues.
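For reference, the policy target is controlled by the `--apply-to` flag of `rabbitmqctl set_policy` (valid values include `queues`, `exchanges`, and `all`). A sketch of the fix described above, reusing the policy name and pattern from Michał's earlier example purely for illustration:

```shell
# Apply the policy to both queues and exchanges matching the pattern.
$ rabbitmqctl set_policy --apply-to all qq-ram "qq" '{"max-in-memory-length": 0}'

# Verify which policy is attached to each queue.
$ rabbitmqctl list_queues name policy
```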

Thank you for your time, Michał.