Is using multiple queue-to-queue bound shovels with direct routing topology faster than single exchange-to-exchange shovel with topic routing?


abhijit damle

unread,
Jun 15, 2015, 12:56:55 AM6/15/15
to rabbitm...@googlegroups.com
I have 3 applications/consumers (let's call them processors) connected to a RabbitMQ cluster made of 3 nodes set up in HA mode on one network. I have a few (could be 0 to 200) applications/consumers (let's call them clients) connected to a RabbitMQ cluster made of 2 nodes set up in HA mode on a separate network. Since the two networks are different, I am presently using Shovels to transport messages between these 2 RabbitMQ clusters.

At present I am using direct exchanges with pre-configured queue-to-queue dynamic shovels to receive messages from clients on the processor RabbitMQ cluster. For replying, however, I am using direct exchanges and queue-to-queue dynamic shovels created programmatically at runtime to send messages back to clients from processors, deleting these runtime-created shovels and their queues once a message is delivered and consumed by the client.

Would it be less performant (slower) if I instead used a topic-based exchange with a single pre-configured exchange-to-exchange shovel and published messages using routing keys?
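For context, a runtime-created queue-to-queue dynamic shovel like the ones described above can be declared through the management HTTP API's shovel parameter endpoint. A minimal sketch in Python follows; the URIs, queue names, and vhost are placeholders, not values from this thread:

```python
import json
from urllib import request


def shovel_definition(src_uri, src_queue, dest_uri, dest_queue):
    """Build the JSON body for a queue-to-queue dynamic shovel parameter."""
    return {
        "value": {
            "src-uri": src_uri,
            "src-queue": src_queue,
            "dest-uri": dest_uri,
            "dest-queue": dest_queue,
            # Have the broker delete the shovel once the source queue
            # drains, instead of deleting it programmatically afterwards.
            "src-delete-after": "queue-length",
        }
    }


def put_shovel(mgmt_url, vhost, name, body):
    """PUT /api/parameters/shovel/{vhost}/{name} on the management API.
    Defined but not called here, since it needs a reachable broker."""
    req = request.Request(
        f"{mgmt_url}/api/parameters/shovel/{vhost}/{name}",
        data=json.dumps(body).encode(),
        headers={"content-type": "application/json"},
        method="PUT",
    )
    return request.urlopen(req)
```

The `src-delete-after: queue-length` option would let the broker tear each reply shovel down on its own, which may simplify the cleanup step described above.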

Michael Klishin

unread,
Jun 15, 2015, 1:25:34 AM6/15/15
to rabbitm...@googlegroups.com, abhijit damle
On 15 June 2015 at 07:56:58, abhijit damle (abhiji...@gmail.com) wrote:
> Would it be less performant (slower) if I instead use topic based
> exchange and pre-configured single exchange-to-exchange
> shovel and publish messages using routing keys?

Shovel means you first consume messages, then publish them, potentially over the network,
as opposed to sending them to queue masters "directly".
Chances are, the most efficient way to do request/response in your case would
be Direct Reply To [1]. 
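The Direct Reply To pattern from [1] can be sketched with pika roughly as follows; the broker connection, server queue name, and function names are illustrative, and only the `amq.rabbitmq.reply-to` pseudo-queue name comes from the linked docs:

```python
# Pseudo-queue name defined by RabbitMQ for Direct Reply To.
REPLY_TO = "amq.rabbitmq.reply-to"


def rpc_call(channel, request_body, server_queue="rpc.requests"):
    """Send a request and collect one reply without declaring a reply queue.
    `channel` is an open pika channel; requires a running broker to execute."""
    import pika  # imported lazily so this sketch loads without a broker

    responses = []

    def on_reply(ch, method, props, body):
        responses.append(body)
        ch.stop_consuming()

    # Per the docs: start consuming from the pseudo-queue first, and it
    # must be consumed in automatic acknowledgement mode.
    channel.basic_consume(queue=REPLY_TO, on_message_callback=on_reply,
                          auto_ack=True)
    channel.basic_publish(
        exchange="",
        routing_key=server_queue,
        properties=pika.BasicProperties(reply_to=REPLY_TO),
        body=request_body,
    )
    channel.start_consuming()
    return responses[0]
```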

Oh, and another suggestion. Never trust anybody giving you throughput advice from
a very brief description of a problem. Those responses are worth as much as a
guess from a complete stranger. Measure and see for yourself.

1. http://www.rabbitmq.com/direct-reply-to.html
--
MK

Staff Software Engineer, Pivotal/RabbitMQ


Michael Klishin

unread,
Jun 15, 2015, 3:38:32 AM6/15/15
to abhijit damle, rabbitm...@googlegroups.com
+rabbitmq-users 

On 15 June 2015 at 10:33:49, abhijit damle (abhiji...@gmail.com) wrote:
> RPC reply-to won't work when the reply-to queue is on a
> different RabbitMQ broker in a different network

That's true. A single Shovel has limited throughput, so if you can (and need to)
use multiple ones, it may be a good idea. Keep in mind that the link will be your
limiting factor at some point, not the number of Shovels or CPU cores.

As for exchange type, unless you have a lot of bindings,
this will be a drop in the bucket in terms of how much time it takes.
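For comparison with the queue-to-queue sketch, the single pre-configured exchange-to-exchange variant being weighed here would look roughly like this as a dynamic shovel definition. The URIs, exchange names, and binding key are made up for illustration:

```python
def e2e_shovel_definition():
    """Illustrative dynamic shovel parameter body for an
    exchange-to-exchange link with a topic binding."""
    return {
        "value": {
            "src-uri": "amqp://processors-cluster",
            "src-exchange": "replies",       # shovel binds a queue to this
            "src-exchange-key": "client.#",  # topic binding key
            "dest-uri": "amqp://clients-cluster",
            # Messages are republished to this exchange with their
            # original routing keys, so the topic exchange on the
            # destination side routes them to the per-client queues.
            "dest-exchange": "replies",
        }
    }
```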

abhijit damle

unread,
Jun 15, 2015, 5:11:20 PM6/15/15
to rabbitm...@googlegroups.com, abhiji...@gmail.com
Hi Michael,

          Thanks for your earlier responses. So having a single exchange-to-exchange shovel with multiple queues (potentially 200 clients/consumers) bound to the exchange will take more time to deliver messages to the queues compared to having multiple queue-to-queue shovels (potentially one per queue) bound to the exchange?

Regards,
Abhijit

Michael Klishin

unread,
Jun 15, 2015, 5:16:06 PM6/15/15
to rabbitm...@googlegroups.com, abhijit damle
On 16 June 2015 at 00:11:22, abhijit damle (abhiji...@gmail.com) wrote:
> So having a single exchange-to-exchange shovel with multiple
> queues (potentially 200 clients/consumers) bound to the exchange
> will take more time to deliver messages to the queues compared
> to having multiple queue-to-queue shovels (potentially one per
> queue) bound to the exchange?

It will have a limited degree of parallelism for message transfer.
However, since you will be transferring just 1 message set instead of 200
(worst case), it may still be more efficient. There are many factors
that may tip the scales.

That's why measuring with your specific workload is so important. 