RabbitMQ MQTT broker uses a lot of memory


D Joshi

May 6, 2021, 2:12:08 PM
to rabbitmq-users

I am evaluating RabbitMQ as an MQTT broker and am currently running benchmarks to check its performance. Using the benchmark tool https://github.com/takanorig/mqtt-bench, I published 1-byte messages from 10,000 clients. RabbitMQ's memory consumption for this load is about 2 GB, and it is the same for 10,000 subscriptions as well. Here is the breakdown reported by rabbitmq-diagnostics memory_breakdown:

connection_other: 1.1373 gb (55.89%)
other_proc:       0.3519 gb (17.29%)
allocated_unused: 0.1351 gb (6.64%)
other_system:     0.0706 gb (3.47%)
quorum_ets:       0.0675 gb (3.32%)
plugins:          0.0555 gb (2.73%)
binary:           0.0482 gb (2.37%)
mgmt_db:          0.035 gb (1.72%)
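For reference, an invocation along these lines reproduces the publish scenario above; the flag names are my reading of the mqtt-bench README (check mqtt-bench -help for the exact options), and the broker address is a placeholder:

# Publish one 1-byte message from each of 10,000 clients (placeholder host/port)
mqtt-bench -action=pub -broker="tcp://<broker-host>:1883" -clients=10000 -count=1 -size=1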


This works out to roughly 200 KB of broker memory per connection (2 GB / 10,000 connections), which seems high to me, considering that we need to scale the system to 1 million connections in the future; at that rate we would need around 200 GB of RAM for RabbitMQ alone.

I have tried tweaking some settings in my rabbitmq.conf file and in the docker command:

mqtt.allow_anonymous=false 
collect_statistics_interval = 240000 
management.rates_mode = none 
mqtt.tcp_listen_options.sndbuf = 1000 
mqtt.tcp_listen_options.recbuf = 2000 
mqtt.tcp_listen_options.buffer = 1500
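(To confirm these listener options are actually picked up by the running node, one way is to dump its effective configuration with rabbitmq-diagnostics environment; the container name below is a placeholder.)

# Dump the node's effective configuration and look for the MQTT TCP listener options
docker exec <rabbitmq-container> rabbitmq-diagnostics environment | grep -A3 tcp_listen_options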

Below is the docker command, where I've also tried to reduce the tcp_rmem and tcp_wmem sizes:

docker run -d --rm \
  -p 8883:8883 -p 1883:1883 -p 15675:15675 -p 15672:15672 \
  -v /home/ubuntu/certs:/certs \
  --sysctl net.core.somaxconn=32768 \
  --sysctl net.ipv4.tcp_max_syn_backlog=4096 \
  --sysctl net.ipv4.tcp_rmem='1024 4096 500000' \
  --sysctl net.ipv4.tcp_wmem='1024 4096 500000' \
  -e RABBITMQ_VM_MEMORY_HIGH_WATERMARK=0.9 \
  -e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 2000000" \
  -t probusdev/hes-rabbitmq:latest
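(To double-check that the per-container sysctls are applied, the effective values can be read back from /proc inside the container; the container name below is a placeholder.)

# Read back the effective TCP buffer sysctls inside the running container
docker exec <rabbitmq-container> cat /proc/sys/net/ipv4/tcp_rmem /proc/sys/net/ipv4/tcp_wmem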


Are there any other settings I can try to reduce the memory consumption?

Michal Kuratczyk

May 10, 2021, 3:35:41 AM
to rabbitm...@googlegroups.com
Hi,

This is definitely a non-trivial challenge. Reducing per-connection memory, while always nice, is likely one of the least important aspects - 200GB of RAM is not that much in the grand scheme of (internet of) things. ;)
Some aspects that need to be considered:
1. Can your OS even handle a million connections (or, say, a third of a million, assuming a 3-node cluster and perfect distribution)? It probably can after some tuning, but almost certainly not without it.
2. RabbitMQ keeps per-connection and per-channel stats - I'd suggest turning them off (https://www.rabbitmq.com/management.html#disable-stats; https://www.youtube.com/watch?v=NWISW6AwpOE); see the config sketch after this list.
3. Your exchange/queue topology is critical as well - if you have a queue per device, that also means a million queues. The types of queues and the queue features you want to use - everything matters when you have that many.
4. How do you plan to perform maintenance (e.g. upgrades) of this cluster - blue-green or in-place? If in-place, you need to make sure the cluster can handle the traffic with just two nodes temporarily.
5. It's great that you started by running a simulation, but there is definitely more to be done. It's not as if anyone can just hand you the "correct" configuration for a system of this scale.
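For point 2, a minimal rabbitmq.conf sketch would look something like the snippet below; the key names come from the documentation linked above, so please verify them against your RabbitMQ version, and rely on the Prometheus plugin for monitoring instead.

# Stop the management plugin from collecting per-connection/per-channel/per-queue metrics
management_agent.disable_metrics_collector = true
# Disable the stats-backed parts of the management UI/API
management.disable_stats = true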

Feel free to reach out (via email or RabbitMQ slack) if you would like to have a meeting to discuss some of the challenges. We are definitely interested in how people use RabbitMQ in IoT scenarios and how we can make RabbitMQ better.

Best,




--
Michał
RabbitMQ team