I am seeing some weird behavior while benchmarking NSQ. Here is the testing environment -
Number of nsq instances - 1
Message size - 1MB
Number of topics - 1
Number of Channels - 1
I am using an ephemeral channel for the testing, with the default mem-queue-size of 10000. In the worst case I would expect a memory footprint of ~10GB (1MB message size * 10,000 messages).
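To make the worst-case arithmetic above explicit, here is a trivial sketch (the numbers are just the setup values from this post, nothing nsqd-specific):

```go
package main

import "fmt"

func main() {
	// Worst case for one ephemeral channel at the default mem-queue-size:
	// every queue slot holds one full message body in memory.
	const msgSize = 1 << 20    // 1MB per message
	const memQueueSize = 10000 // nsqd default --mem-queue-size
	total := msgSize * memQueueSize
	fmt.Printf("%d MB (~%.1f GB)\n", total/(1<<20), float64(total)/(1<<30))
	// → 10000 MB (~9.8 GB), i.e. the ~10GB expectation above
}
```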
But while benchmarking, I observed nsqd using as much as 40GB of memory. nsq_stats on that topic and channel shows a depth of 10,000 and no backend depth (which is expected for an ephemeral channel). Now if I stop the load, the depth goes down to zero, but the nsqd process is still holding the memory. It doesn't release it.
This doesn't look like a memory leak, because if I start the load again up to the max in-memory depth, nsqd still uses the same 40GB of memory. Two things come to mind -
1. nsqd is not releasing the buffers but reusing them.
2. It is maintaining duplicate copies of the messages somewhere, which would explain a 40GB memory footprint vs the expected 10GB.
I tested this with different nsqd versions, including the latest.
Thanks,
Ashish