Deadlock when the disruptor's executor has fewer threads than there are workers in the pools.


Miljan Markovic

Sep 19, 2013, 5:35:17 AM
to lmax-di...@googlegroups.com
Hi everybody,

I tried to make a disruptor with 3 worker pools, each having 4 workers. Pools 1 and 2 could process events in parallel while pool 3 followed them. The JUnit test file is attached.

First I initialized the disruptor's executor with 10 threads, and the disruptor could not shut down (the shutdown() method never returned). I then tried an executor with 12 threads and everything worked fine (shutdown() returned properly).

It seems that I need as many threads in the executor as there are handlers (eventHandlers + workHandlers) in the disruptor, which is a waste of resources if some handlers are just pass-through for certain events (they have nothing to do with them). From inspecting the code, I can see that BatchEventProcessor and WorkProcessor each run in one thread and hold it until they finish, which is fine, but why would each WorkerPool need a separate thread for each of its workers? The above example could easily be executed with 3 + <worker threads number> threads (>3 and <12).

Is there any specific reason for this kind of WorkProcessor behavior?

I'm using disruptor v. 3.2

Thanks,
Miljan

MultipleWorkerPoolsTest.java

Jason Koch

Sep 19, 2013, 5:55:31 AM
to lmax-di...@googlegroups.com
Hi Miljan

This is not a surprise and is by design. Each EventHandler is allocated a thread from the ExecutorService you provide. It is not used as a classic executor with each event submitted to the executor.
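This thread-per-handler model can be sketched without the Disruptor library at all. In the demo below, each "processor" is a task that claims a pool thread for its whole lifetime (as BatchEventProcessor and WorkProcessor do); with 12 such tasks on a 10-thread pool, two of them sit in the queue and never start, which is exactly why shutdown() hangs in Miljan's test. The class and variable names here are illustrative, not Disruptor API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadStarvationDemo {
    public static void main(String[] args) throws Exception {
        int processors = 12;                                      // 12 event/work processors
        ExecutorService pool = Executors.newFixedThreadPool(10);  // but only 10 threads

        CountDownLatch started = new CountDownLatch(processors);
        CountDownLatch halt = new CountDownLatch(1);

        for (int i = 0; i < processors; i++) {
            pool.submit(() -> {
                started.countDown();     // processor announces it is running
                try {
                    halt.await();        // then holds its thread until "shutdown"
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Only 10 of the 12 "processors" can ever start while the first 10
        // hold their threads, so the latch cannot reach zero in time.
        boolean allStarted = started.await(500, TimeUnit.MILLISECONDS);
        System.out.println("all processors started: " + allStarted); // prints false

        halt.countDown();                // release the threads so the demo exits
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

With a 12-thread pool the latch would reach zero and `allStarted` would be true, mirroring Miljan's observation that 12 executor threads let shutdown() complete.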

I may be misunderstanding, but I can't see the point in having fewer threads than handlers. What behaviour do you expect? If you expect events to wait inside the WorkerPool until a thread becomes available to run a worker, that would stall events. In that case, why not just configure fewer handlers in the WorkerPool? You would get the same outcome.

With respect to wasting resources: if your handlers pass over an event very quickly, they will consume very little CPU, especially if configured with a blocking wait strategy. I don't think this would be a significant drain on performance if configured appropriately.

Thanks
Jason






Miljan Markovic

Sep 19, 2013, 11:00:50 AM
to lmax-di...@googlegroups.com
Thanks, I see it more clearly now.

My concern about resources was the case where you have multiple I/O-bound worker pools running in parallel, where only one of them actually performs useful work on a given event (based on event type). But I can always refactor that into a single worker pool where workers do different things based on the event type.
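That refactoring can be sketched as a single dispatching worker: every worker in the one pool runs the same branch-on-type logic, so no thread is held just to pass events through. The names below (EventType, Event, onEvent) are illustrative stand-ins; in real code the dispatcher would implement com.lmax.disruptor.WorkHandler<Event>:

```java
public class DispatchDemo {
    // Hypothetical event types standing in for the work of the three pools.
    enum EventType { PARSE, ENRICH, PERSIST }

    static class Event {
        final EventType type;
        Event(EventType type) { this.type = type; }
    }

    // Every worker in the single pool runs this same dispatch logic,
    // replacing one dedicated pool per event type.
    static String onEvent(Event event) {
        switch (event.type) {
            case PARSE:   return "parsed";     // work of pool 1
            case ENRICH:  return "enriched";   // work of pool 2
            case PERSIST: return "persisted";  // work of pool 3
            default:      throw new IllegalStateException("unknown type");
        }
    }

    public static void main(String[] args) {
        System.out.println(onEvent(new Event(EventType.ENRICH))); // prints "enriched"
    }
}
```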