This is on a production RabbitMQ cluster (2 disk nodes). We currently have about 100 connections to the cluster (not necessarily balanced between the nodes). The machines have 2.5 GB of RAM and RabbitMQ has allocated about 993 MB for itself. Since the cluster is currently very low volume, a Rabbit node's memory usage stays around 30-50 MB. Consumers are fast enough that there are hardly any messages waiting in queues.
The rabbitmq user currently has the default hard/soft limit of 1024. Given the above environment, and adding quite a bit of room to spare, what is a generally acceptable limit?
Over the next 6 months to a year this will become a very high volume environment with a lot of messages waiting in queues and probably over 1000 connections. When we scale up, what should we watch for as an indication that we need to increase the open file descriptor limit?
You definitely need more file descriptors than the number of connections.
As a rule of thumb I'd suggest (a rough sum along these lines is sketched below):
- 1 file descriptor for every connection
- 3+ file descriptors for every queue that might be written to disk
- about 100 descriptors reserved for internal Erlang use
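For example, a rough sum for the environment described above; the queue count here is an illustrative guess, not a figure from the original post:

CONNECTIONS=100      # connections, from the original post
QUEUES=200           # hypothetical number of queues that might be written to disk
ERLANG_RESERVE=100   # descriptors kept back for internal Erlang use
echo $(( CONNECTIONS + 3 * QUEUES + ERLANG_RESERVE ))
# prints 800 (a follow-up below revises the per-queue figure down to 1,
# so treat this as an upper bound); the default limit of 1024 covers
# today's load, but not by a large margin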
There isn't any penalty for setting a high 'ulimit -n', so why not just
set it to the maximum?
On Linux:
cat /proc/sys/fs/file-max
ulimit -n 1048576
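On the "what should we watch for" question, a minimal sketch of how you might keep an eye on descriptor usage; the grep pattern assumes the Erlang-term output of rabbitmqctl status, and the Debian config path is an assumption about your packaging:

rabbitmqctl status | grep -A4 file_descriptors
# the file_descriptors section reports total_limit/total_used and
# sockets_limit/sockets_used; total_used or sockets_used creeping up
# towards its limit is the usual sign that 'ulimit -n' needs raising
# note that 'ulimit -n' only affects the current shell; on Debian-style
# packages one common way to raise the broker's own limit is a line such as
#   ulimit -n 65536
# in /etc/default/rabbitmq-server, which the init script sources on startup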
Err, that's actually a max of 1 per queue.
And that is only a rule of thumb - there is no problem with running a million
queues off 1000 fds; Rabbit will just cope as necessary, but you might
find performance improves with additional fds.
If your ulimit is X, then the number of connections you'll be allowed to
open is (0.9*(X - 100)) - 2. Rearranging that gives you the minimum ulimit
for a given connection count (worked through below). More fds never hurt!
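To make the rearrangement concrete, here is the sum worked through for the 1000-connection target mentioned in the original question:

# allowed connections = 0.9 * (X - 100) - 2, so for a target of C connections
# the minimum ulimit is X = (C + 2) / 0.9 + 100, rounded up
C=1000
awk -v c="$C" 'BEGIN { x = (c + 2) / 0.9 + 100; print int(x) + (x > int(x)) }'
# prints 1214, so the default limit of 1024 is already too low for 1000
# connections, before counting any per-queue descriptors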
Matthew