Hi RabbitMQ
I've been trying to get around this one-queue problem on a clustered server for weeks now. The trouble with clustering nodes is that the physical queue lives on a single server, and that server becomes a bottleneck when load starts to increase.
My idea is to run the same queue on different servers and distribute the load across them.
The only way I can do this is with non-clustered servers: create the same queue name "myworker" on each one, and put a load balancer in front of RabbitMQ to split the load across them. But the nice feature of a cluster is that you don't need to keep track of things yourself - so I'm stuck between clustering and running the queue on all servers :)
What I am trying to do is really simple:
- Devices in the field report to Rabbit on their own queue
- Workers listen for messages on the worker-queue
- Devices post any info to the worker-queue
- Workers can respond on the device-queue
In short:
- Each device listens on its own queue for messages
- Workers listen on the worker-queue for messages from devices
- Workers can post requests to a device using its device-queue
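To make the pattern concrete, here is a minimal sketch of the request/reply flow I mean, using Python's stdlib `queue.Queue` as a stand-in for the RabbitMQ queues (the names `worker_queue`, `device_queues`, and the message shape are just illustrative, not real broker API):

```python
import queue

# Stand-ins for the RabbitMQ queues: one shared worker-queue,
# plus one private queue per device (hypothetical names).
worker_queue = queue.Queue()
device_queues = {"device-42": queue.Queue()}

def device_report(device_id, payload):
    # A device posts info to the worker-queue, tagged with its own
    # queue name so a worker knows where to reply.
    worker_queue.put({"reply_to": device_id, "payload": payload})

def worker_step():
    # A worker dequeues one message and responds on the device-queue.
    msg = worker_queue.get()
    device_queues[msg["reply_to"]].put({"ack": msg["payload"]})
    return msg

device_report("device-42", "temperature=21")
worker_step()
print(device_queues["device-42"].get())  # the worker's reply
```

With real RabbitMQ this would map to a shared queue plus per-device reply queues (the classic RPC pattern with a `reply_to` property); the sketch only shows the message flow, not the broker placement problem.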
With thousands and thousands of devices sending messages to a single queue on a single server, the system will die - so we need to run the same queue on all servers and let the workers dequeue messages.
So with a single server this works. With lots of servers not in cluster mode it also works, but then we need to keep track of where each device reported the first time - i.e. the device must be routed to a dedicated broker and never talk to any other server. We'd need to keep a record of that mapping, almost like LDAP.
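For the "keep a record like LDAP" part, one lighter option might be deterministic routing: hash the device ID to pick its broker, so no lookup table needs to be stored or synced at all. A rough sketch (the broker hostnames and the helper function are purely hypothetical):

```python
import hashlib

# Hypothetical list of standalone RabbitMQ nodes.
BROKERS = ["rmq-1.example", "rmq-2.example", "rmq-3.example"]

def broker_for(device_id: str) -> str:
    # A stable hash means the same device always lands on the same
    # broker, with nothing to record or keep in sync between nodes.
    digest = hashlib.sha256(device_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(BROKERS)
    return BROKERS[index]

print(broker_for("device-42"))
```

The obvious caveat: adding or removing a broker remaps most devices with plain modulo hashing; a consistent-hashing scheme would limit that churn, at the cost of a bit more code.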
If we use clustering, the cluster keeps track of which node hosts a device's dedicated queue - but then you cannot run the same queue on all nodes :|
What I did find is that federated queues would let me run "clustered", but clustering plus federation is becoming such a complex setup that I'm worried I won't keep everything in sync, and in the long term keeping up with upgrades etc. will get tricky. I'm already worried about upgrading a clustered RMQ :) - adding federated queues is just another puzzle to keep in mind when upgrading or when things go down.
My final short version is:
- I am now looking at running standalone nodes and building a dedicated routing system for devices
But maybe somebody has done something like this - worker queues plus each device's own queue - while keeping things easy to scale and upgrade, without too much worry when a node goes down.