arranging a Disruptor with multiple consumers so that each event is only consumed once


ben

Jul 26, 2012, 5:20:34 AM
to lmax-di...@googlegroups.com
Hi, in the FAQ (http://code.google.com/p/disruptor/wiki/FrequentlyAskedQuestions), this question is answered by doing a lookup on mod(seqNum).

However, how do you control the seqNum used in the first place? I thought they were allocated sequentially as messages came in using ringBuffer.Next().

thanks

Michael Barker

Jul 27, 2012, 7:43:45 AM
to lmax-di...@googlegroups.com
> However, how do you control the seqNum used in the first place? I thought
> they were allocated sequentially as messages came in using ringBuffer.Next().

I'm not sure I understand the question. Yes, sequence numbers are
issued sequentially, therefore by virtue of the mod operation the
events will be allocated in a round-robin fashion across the
consumers.
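The round-robin allocation Mike describes can be sketched as a plain predicate: each handler sees every sequence number, but only claims the ones that map to it under mod. This is an illustrative sketch, not part of the Disruptor API (the class and method names here are made up); it assumes sequences are issued contiguously, as they are by the ring buffer.

```java
// Hypothetical helper: decides whether a given handler should process
// a given sequence number, partitioning the sequence space round-robin.
class ModPartition {
    // With N handlers, handler i processes sequences where seq % N == i.
    static boolean shouldProcess(long sequence, int handlerIndex, int numHandlers) {
        return sequence % numHandlers == handlerIndex;
    }
}
```

With two handlers, handler 0 would claim sequences 0, 2, 4, … and handler 1 would claim 1, 3, 5, …, so every event is processed exactly once.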

Mike.

Ben Mcmillan

Jul 27, 2012, 10:44:19 AM
to lmax-di...@googlegroups.com
Sorry, I thought the purpose was to consume different types of messages with different consumers, but I assume the purpose is actually to distribute the same message types across multiple threads - is that correct?

Michael Barker

Jul 27, 2012, 10:48:12 AM
to lmax-di...@googlegroups.com
On Fri, Jul 27, 2012 at 3:44 PM, Ben Mcmillan <ben.m...@gmail.com> wrote:
> sorry I thought the purpose was to consume different types of messages with
> different consumers, but I assume the purpose is actually to distribute
> the same message types across multiple threads - is that correct?

Yes.

Matthew Hall

Mar 3, 2014, 2:12:35 PM
to lmax-di...@googlegroups.com

Hi Mike,

I understood the question and have the same question, but I'm not sure I understand this answer. :)

The FAQ appears to say that if we want the Disruptor to distribute events across multiple consumers, we have to manually select which events to process in each consumer to avoid duplicate processing.

But this is the opposite of my usual understanding of how ring buffers operate. Normally, a multi-consumer ring buffer is set up so that only one of the consumers can claim a particular event or batch of events in the event sequence, which seems to be the opposite of what the FAQ says.

Can you clarify a little more about when the modulo operation should and shouldn't be used?

Thanks,
Matthew.


Rajiv Kurian

Mar 3, 2014, 10:44:48 PM
to lmax-di...@googlegroups.com
The ring-buffer architecture in the Disruptor project shines in multicast scenarios, where each event needs to be processed by multiple consumers. The sequence barriers and the DSL provide an elegant way to specify pipelines composed of parallel and serial steps. This kind of application is what it is optimized for (as opposed to generic queues).

If you want to divide work between multiple consumers there are two general ways:
  1. Use a multi-consumer ring-buffer. Like you said, we have to manually select which event to process on each consumer thread. For example, if we had two consumers, one could process the even ones and mark the odd ones as processed without actually handling them, while the other one could process the odd ones.
  2. Create a SPSC ring-buffer per consumer connected to the producer (assuming single producer). The producer thread then decides how to distribute events across multiple consumers.
AFAIK there is no SPMC/MPMC kind of facility where multiple consumers take a chunk of events from the ring-buffer and process them. Every consumer "processes" every event.
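Option 1 above can be sketched concretely. This is a self-contained illustration, not real Disruptor code: the `EventHandler` interface is redeclared locally to mirror the shape of Disruptor's handler callback (`onEvent(event, sequence, endOfBatch)`), and `PartitionedHandler` is a hypothetical name. Each handler instance sees every event but does real work only on the sequences assigned to it; the others pass through untouched, which is how they end up "marked as processed" from the ring buffer's point of view.

```java
// Locally declared stand-in for Disruptor's EventHandler callback shape.
interface EventHandler<T> {
    void onEvent(T event, long sequence, boolean endOfBatch);
}

// Hypothetical partitioned handler: instance `ordinal` of `numHandlers`
// processes only sequences where sequence % numHandlers == ordinal.
final class PartitionedHandler implements EventHandler<String> {
    private final int ordinal;
    private final int numHandlers;
    final java.util.List<String> handled = new java.util.ArrayList<>();

    PartitionedHandler(int ordinal, int numHandlers) {
        this.ordinal = ordinal;
        this.numHandlers = numHandlers;
    }

    @Override
    public void onEvent(String event, long sequence, boolean endOfBatch) {
        if (sequence % numHandlers == ordinal) {
            handled.add(event); // real work would go here
        }
        // else: skip silently; the event still counts as consumed by
        // this handler because its sequence advances past it
    }
}
```

Wiring two such handlers onto the same ring buffer would give the even/odd split described above, with each event handled exactly once.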

Michael Barker

Mar 5, 2014, 3:37:40 PM
to lmax-di...@googlegroups.com
Hi Matthew,

I think you are confusing ring buffers with queues.  The behaviour you describe is typical of a queue, and the main feature that distinguishes the Disruptor from a queue is its multicast behaviour.  The other option would be to use the WorkerPool.  However, if you need multiple consumers to behave like a queue or an executor would, my recommendation is to look for a fast implementation of one of those instead.

Mike.



Matthew Hall

Mar 5, 2014, 3:48:28 PM
to lmax-di...@googlegroups.com
Hi Mike,

Thanks for the clarification. I didn't confuse ring buffers with
queues; I confused things a bit further down the line. I have worked
before with SPMC and MPMC ring buffers, but the ones I used (in C)
only allowed one consumer to pick up any given chunk of events,
because they were used for packet processing and you don't want
packets to be processed in duplicate.

Thus I wasn't used to the Disruptor's multicast use of the ring
buffer, as this is somewhat different from the more classical ring
buffer uses for handling packets.

In my case, I found it ran faster using the classic ThreadPoolExecutor
with a spinlock on the input for rate control when the queue fills up,
than it did using the WorkerPool. Of course, this makes sense, since
your code optimizes for multicast, and I wanted round-robin or some
other load-balanced method instead of multicasting.
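The setup I mean can be sketched roughly as follows, using only standard java.util.concurrent APIs (the class and method names are just illustrative): a fixed-size ThreadPoolExecutor fed by a bounded queue, where the producer busy-waits instead of dropping work when the queue is full.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class SpinSubmit {
    // Fixed-size pool over a bounded queue; the default AbortPolicy
    // makes execute() throw when the queue is full.
    static ExecutorService pool(int threads, int capacity) {
        return new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(capacity));
    }

    // Busy-wait until the pool accepts the task -- the "spinlock on
    // the input" style of rate control described above.
    static void spinSubmit(ExecutorService pool, Runnable task) {
        while (true) {
            try {
                pool.execute(task);
                return;
            } catch (RejectedExecutionException e) {
                Thread.onSpinWait(); // hint to the CPU; Java 9+
            }
        }
    }
}
```

The effect is load-balanced (each task runs on exactly one worker) rather than multicast, which is why it fit my use case better.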

Matthew.

Michael Barker

Mar 5, 2014, 3:58:40 PM
to lmax-di...@googlegroups.com
> In my case, I found it ran faster using the classic ThreadPoolExecutor
> with a spinlock on the input for rate control when the queue fills up,
> than it did using the WorkerPool. Of course, this makes sense, since
> your code optimizes for multicast, and I wanted round-robin or some
> other load-balanced method instead of multicasting.

I'm not surprised. WorkerPool hasn't received a lot of attention and optimisation, and is unlikely to in the future as we don't have a use for it at LMAX.  I know there are others working on fast queues and executors; when they become public, I will probably deprecate the WorkerPool.

Mike.