Do streams have lower CPU only when segment count is low?


Ankit Jain

Jan 6, 2026, 2:21:47 AM (3 days ago) Jan 6
to rabbitmq-users
Hi,
We have a 3-node RabbitMQ cluster (4.0.6); each node has its own dedicated VM.
We are seeing continuous 90% CPU utilization.
Below is the stream configuration:
1. x-stream-max-segment-size-bytes = 5,000,000 bytes ≈ 4.77 MB
2. x-max-age = 43,200 s = 12 hours
3. 13.5 million retained messages
4. Total number of segments: 21,542
5. Many streams have hundreds of segments
6. Some streams have 1,000–2,800 segments
7. Total number of streams: 87

 
Can these settings lead to high CPU? If so, is the number of segments the root cause? If yes, would increasing x-stream-max-segment-size-bytes to, say, 512 MB reduce the number of segments, but increase the storage requirement?
Is this understanding correct?
Can someone help?
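For context, the figures in the post imply roughly the following on-disk volume and per-stream segment counts (a quick sketch assuming every segment is close to the 5,000,000-byte cap; the totals are estimates, not measured values):

```python
# Rough arithmetic from the reported configuration.
# Assumption: each segment is filled to the configured cap.
segment_size = 5_000_000          # x-stream-max-segment-size-bytes
total_segments = 21_542
total_streams = 87

total_bytes = total_segments * segment_size
avg_segments_per_stream = total_segments / total_streams

print(f"approx. retained data: {total_bytes / 1e9:.1f} GB")
print(f"avg. segments per stream: {avg_segments_per_stream:.0f}")
```

That is on the order of 100 GB spread across tens of thousands of segments, with each stream averaging a couple hundred segments for the retention scan to walk.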


Karl Nilsson

Jan 6, 2026, 5:05:51 AM (3 days ago) Jan 6
to rabbitm...@googlegroups.com
Yes, most likely the CPU use is from the retention evaluation, which has to scan through _all_ segments of a stream in order to perform truncation.

Why do you have such a small segment size? Very small segment sizes should only be used for very short streams, i.e. when consumers are only really interested in very recent messages. For longer streams, as you seem to have, you must use larger segment sizes.




--
Karl Nilsson

Ankit Jain

Jan 6, 2026, 6:17:36 AM (3 days ago) Jan 6
to rabbitm...@googlegroups.com
>> Very small segment sizes should only be used for very short streams
What is the recommended retention period when we say "very short stream"?

In our use case, yes, consumers are only really interested in very recent messages, but we also do not want to lose any messages. So, considering producer and consumer lag, we kept the retention at 12 h.


Karl Nilsson

Jan 6, 2026, 7:21:41 AM (3 days ago) Jan 6
to rabbitm...@googlegroups.com
If the stream is relatively high throughput or contains largish messages, I think you want to do a back-of-envelope calculation of how much data you write in 12 hours, and adjust segment sizing accordingly to keep the total number of segments in the sub-100 range.
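That back-of-envelope calculation might look like the sketch below. The ingress rate here is a hypothetical placeholder, not a value from the thread; plug in your own measured per-stream write rate:

```python
# Sketch: pick a segment size that keeps a 12-hour stream under ~100 segments.
# bytes_per_second is an assumed example value (1 MB/s), not from the thread.
bytes_per_second = 1_000_000
retention_seconds = 12 * 3600     # x-max-age = 12 hours

data_per_window = bytes_per_second * retention_seconds
target_max_segments = 100
min_segment_size = data_per_window / target_max_segments

print(f"data written per 12h window: {data_per_window / 1e9:.1f} GB")
print(f"segment size for <= 100 segments: {min_segment_size / 1e6:.0f} MB")
```

With this example rate the stream writes about 43 GB per retention window, so a segment size in the hundreds of megabytes (e.g. the 512 MB mentioned earlier in the thread) keeps the segment count well under 100.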



--
Karl Nilsson

Ankit Jain

Jan 7, 2026, 8:47:55 AM (2 days ago) Jan 7
to rabbitm...@googlegroups.com
Hi Karl,

I wanted to dynamically update the segment size of the stream; for that I executed the commands below:

1.   [root@rabbitmq-cluster-0 __dp_jobs-0_1750921405177172365]# rabbitmqctl list_queues name,type,policy | grep stream | grep dp.jobs-0 

dp.jobs-0 stream dp-job-stream-tuning

 2. [root@rabbitmq-cluster-0 __dp_jobs-0_1750921405177172365]# rabbitmqctl list_policies 

Listing policies for vhost "/" ... vhost name pattern apply-to definition priority / 
dp-job-stream-tuning ^dp\.jobs.*$ streams {"stream-max-segment-size-bytes":536870912} 0


3. rabbitmqctl list_queues name,type,arguments | grep stream | grep dp.jobs-0 

dp.jobs-0 stream [{"x-queue-leader-locator","least-leaders"},{"x-stream-max-segment-size-bytes",5000000},{"x-max-age","28800s"},{"x-queue-type","stream"}]  

Even though the default arguments set at stream creation have not changed, given the policy attached, will the new parameters take effect for new segments or not?



