NSQ benchmarking - Memory footprint too high


Ashish Goel

Feb 27, 2016, 6:20:36 PM
to nsq-users
I am seeing a weird behavior while benchmarking NSQ. Here is the testing environment:

Number of NSQ instances - 1
Message size - 1MB
Number of topics - 1
Number of channels - 1

I am using an ephemeral channel for the testing, with the default mem-queue-size of 10,000. In the worst case I would expect a memory footprint of ~10GB (1MB message size * 10,000 messages).

But while benchmarking, I observed NSQ using as much as 40GB of memory. nsq_stats for that topic and channel shows a depth of 10,000 and no backend depth (which is expected for an ephemeral channel). Now if I stop the load, the depth goes down to zero but the nsqd process still holds the memory. It doesn't release it.

This doesn't look like a memory leak, because if I start the load again up to the maximum in-memory depth, NSQ still uses the same 40GB of memory. Two things come to mind:
1. NSQ is not releasing the buffers but reusing them.
2. It is maintaining duplicate copies of messages somewhere, which would explain a 40GB memory footprint vs. the expected 10GB.

I tested it with different nsqd versions, including the latest.

Thanks,
Ashish

Matt Reiferson

Feb 29, 2016, 10:13:59 AM
to Ashish Goel, nsq-users
Hi Ashish,

This is a Go thing: because the runtime is garbage collected, it allocates a proportionately larger heap than the "expected" size (the heap grows up to a configurable threshold, then a GC run reclaims memory).

During the lifetime of a process, you also won't typically see memory reclaimed by the OS, as it's usually not worth it.

Hope this helps.

Ashish Goel

Mar 4, 2016, 12:26:48 AM
to nsq-users, ashish.ku...@gmail.com
Thanks Matt for the quick response. Makes sense. I noticed that the memory is reclaimed by the OS after 15-20 minutes.

Does this make NSQ's memory use unbounded? We use the default mem-queue-size of 10,000, but even with this limit, if our clients can create new topics on the fly (it doesn't happen often, but it is allowed), it becomes impossible to bound NSQ's memory footprint to a fixed size in bytes. Is there any workaround for that?

As I was thinking this through, I wondered why not set the threshold to 1 so that operations are not memory-bound. I load-tested the use case with a mem-queue-size of 1 and killed the consumer, but this reduces NSQ's performance by a big factor (from an average of 3-5 ms to 250-300 ms for 1MB payloads). So I believe I need to go back to using some memory buffer and make sure the consumers drain the queue at the same rate. I am not sure if this is the right thread to start a discussion on this, but is this expected?

Matt Reiferson

Mar 4, 2016, 9:35:08 AM
to Ashish Goel, nsq-users
It can't be strictly controlled by size in bytes, but the overall "active" footprint can be estimated as number of topics/channels * mem-queue-size * avg-size-of-msgs.
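The estimate above can be written down directly. A minimal sketch (the function name and parameter grouping are mine, not NSQ's); counting topics and channels together follows the formula's "topics/channels" term, on the assumption that each topic and each channel buffers mem-queue-size messages independently:

```go
package main

import "fmt"

// estimateActiveBytes sketches the rule of thumb above: worst-case bytes
// held by in-memory queues, before Go heap overhead (the resident set
// can be a multiple of this figure between collections).
func estimateActiveBytes(numTopicsAndChannels, memQueueSize, avgMsgBytes int64) int64 {
	return numTopicsAndChannels * memQueueSize * avgMsgBytes
}

func main() {
	// The setup from this thread: one topic plus one channel,
	// 10,000 slots each, 1MB messages.
	est := estimateActiveBytes(2, 10000, 1<<20)
	fmt.Printf("%.1f GB\n", float64(est)/(1<<30)) // prints "19.5 GB"
}
```

Under that assumption, the thread's single topic + channel already accounts for roughly twice the 10GB the original calculation expected, before GC headroom is added on top.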