remove does not nullify buffer item


David Lilljegren

May 13, 2020, 7:52:33 AM
to Conversant Disruptor
Hi,

While using a large DisruptorBlockingQueue I ran into some out-of-memory issues.

I noticed that take() will nullify the item stored in MultithreadConcurrentQueue.buffer[] before returning, but calling drain won't nullify the items removed in MultithreadConcurrentQueue.remove().

Thus the items linger in the buffer and are never GC'ed until the buffer rolls over.
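To make the retention concrete, here is a simplified sketch (NOT the library's actual code, just a 4-slot toy ring buffer) of the two removal styles described above: a take() that nulls its slot versus a bulk-style remove that only advances the cursor.

```java
// Simplified sketch, NOT the library's actual code: a 4-slot ring buffer
// where take() nulls its slot but a bulk-style remove only moves the cursor.
class TinyRing {
    final Object[] buffer = new Object[4];
    int head, tail;

    void offer(Object o) { buffer[tail++ & 3] = o; }

    // take()-style removal: nulls the slot, so the element becomes GC-eligible
    Object take() {
        int idx = head++ & 3;
        Object o = buffer[idx];
        buffer[idx] = null;
        return o;
    }

    // remove()/drain-style removal: advances the cursor without nulling,
    // leaving the reference reachable until the slot is overwritten
    Object removeNoNull() {
        return buffer[head++ & 3];
    }
}
```

With a 200k-slot buffer, every element consumed the second way stays reachable until a later offer overwrites its slot.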

Best Regards
/David


John Cairns

May 13, 2020, 5:33:32 PM
to David Lilljegren, Conversant Disruptor
Hi David,

Thanks for asking! I'd like to understand your workflow a bit better before I answer. Most people are spinning on an empty queue when they use Conversant Disruptor. How many elements do you anticipate storing in the queue? Are these elements very large? What capacity are you allocating for the queue itself?

Thanks!
John Cairns





David Lilljegren

May 14, 2020, 4:42:15 PM
to John Cairns, Conversant Disruptor
Hi John,

Thank you for your reply

I use it as a buffer to store certain events that a worker thread is pushing to a DB

There are several threads producing the events

To get a bit of batching, when writing to the DB I first block on the take() method and then drain the queue
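For reference, that take-then-drain loop can be sketched against the plain java.util.concurrent.BlockingQueue interface (which DisruptorBlockingQueue implements); takeBatch is a made-up helper name:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

class BatchConsumer {
    // Block until at least one event arrives, then sweep up the rest
    // of the queue in one non-blocking pass.
    static <T> List<T> takeBatch(BlockingQueue<T> q) throws InterruptedException {
        List<T> batch = new ArrayList<>();
        batch.add(q.take());   // blocks on an empty queue
        q.drainTo(batch);      // bulk-grab whatever else is queued
        return batch;
    }
}
```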

I had a very high capacity in the queue (200k elements) because my events can be a bit bursty and the writes to the DB may be a bit slow at times.

But once an element has been drained from the queue I don't expect it to linger in the JVM.

The events themselves are not very large but as I said I had a lot of them and was also running with the JVM max memory limited 

Actually, for my use case a blocking drain, ideally with a max limit, would be nice.

Best Regards
David




John Cairns

May 16, 2020, 10:03:50 PM
to David Lilljegren, Conversant Disruptor
Thanks for the explanation, David. I understand your workflow, and when working with large data pipelines batching may make sense. MultithreadConcurrentQueue is a very high performance queue intended to transfer data quickly between threads. As such it is designed to be most efficient when it is small and typically empty. By small, I mean that it would fit in your processor's cache: 2000 entries, often less.

Also, the drain method is intended to be a very efficient mechanism to pull everything out of the queue. The intention is that it boils down to a memcpy-type operation. In my performance testing, the nulling operation makes the drain essentially on par with the performance of single-item polling; it would lose some of the benefit of the bulk operation.
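The trade-off John describes can be illustrated like this (a sketch, not the library's code): a copy-only bulk remove can lean on a single System.arraycopy, while nulling the source slots forces per-element writes.

```java
// Sketch of the bulk-remove trade-off, NOT the library's actual code.
class DrainSketch {
    // Copy-only drain: one memcpy-like pass, but source slots keep their references.
    static int drainNoNull(Object[] src, int from, Object[] dst, int n) {
        System.arraycopy(src, from, dst, 0, n);
        return n;
    }

    // Nulling drain: an extra write per slot, but each reference is released.
    static int drainWithNull(Object[] src, int from, Object[] dst, int n) {
        for (int i = 0; i < n; i++) {
            dst[i] = src[from + i];
            src[from + i] = null;
        }
        return n;
    }
}
```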

I understand that is problematic for your use case. I'd suggest reducing the size of the queue greatly and seeing if your application works as effectively. Would you get similar performance from batching up 2000 or so elements at a time?

If your workload requires hundreds of thousands of elements, I’d consider using the LinkedTransferQueue provided in the JDK.    
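A sketch of the blocking-drain-with-a-cap pattern David asked for, using only JDK types; takeUpTo is a hypothetical helper name, and BlockingQueue.drainTo(collection, maxElements) supplies the limit:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

class CappedBatch {
    // Block for the first element, then drain up to (max - 1) more,
    // so the batch never exceeds max elements.
    static <T> List<T> takeUpTo(BlockingQueue<T> q, int max) throws InterruptedException {
        List<T> batch = new ArrayList<>(max);
        batch.add(q.take());
        q.drainTo(batch, max - 1);
        return batch;
    }
}
```

LinkedTransferQueue unlinks its nodes on removal, so drained elements are not kept reachable from the queue's own structure.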

I sure hope this helps. Please let me know how it works out,

John



