Hey Gesly!
Very good question. I'm actually working on a branch to explore the follow-up for dispatchers and broadcasters in 2.1. One thing is sure: the focus is going to be even more on the Reactive Streams artifacts.
The differences now:
- The processor is a bare-metal Disruptor (which will evolve with more options) with Reactive Streams semantics: no recursive handling, and it connects directly in a pipeline to a possible upstream subscription, opening the option to never block. It also allocates a dedicated resource (a thread) to each subscriber until it completes, errors, or cancels.
- The dispatcher has no backpressure control, supports recursive calls to itself, and can be shared by many streams (it is bound to no one and simply executes the Consumer task passed in alongside the data signal). Its main problem stems from that sharing: if many streams (or anything else) reference and use the same dispatcher, there will be contention. Some optimizations are also missing from the current dispatcher implementations, and 2.1 will fix that too by rationalizing them under the same base, the processor.
- The broadcaster runs on a dispatcher, also implements Processor, and provides a registry of subscribers, allowing publish/subscribe with any dispatcher (except the ones that do not serialize event passing, since concurrent calls to the downstream onXXX methods are not allowed by the spec).
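To make the broadcaster idea concrete, here's a minimal sketch using only plain `java.util.concurrent` — a registry of subscribers whose signals are replayed through a single-threaded "dispatcher" so downstream calls stay serialized. All the names here are mine; this is an illustration of the shape, not the actual Reactor implementation:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch of a broadcaster: a registry of subscribers, with event passing
// serialized through a shared single-threaded dispatcher so no subscriber
// ever sees concurrent signals (the spec's serialization requirement).
public class MiniBroadcaster<T> {
    private final List<Consumer<T>> subscribers = new CopyOnWriteArrayList<>();
    // A single-threaded executor stands in for a serializing dispatcher.
    private final ExecutorService dispatcher = Executors.newSingleThreadExecutor();

    public void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    // Publish: hand the data signal plus the consumer tasks to the dispatcher.
    public void onNext(T value) {
        dispatcher.execute(() -> subscribers.forEach(s -> s.accept(value)));
    }

    public void shutdown() throws InterruptedException {
        dispatcher.shutdown();
        dispatcher.awaitTermination(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        MiniBroadcaster<Integer> b = new MiniBroadcaster<>();
        StringBuilder seen = new StringBuilder();
        b.subscribe(v -> seen.append("A").append(v));
        b.subscribe(v -> seen.append("B").append(v));
        b.onNext(1);
        b.onNext(2);
        b.shutdown();
        System.out.println(seen); // A1B1A2B2
    }
}
```

Note how the dispatcher thread is shared across all subscribers rather than dedicated per subscriber as in the processor case — which is exactly where the contention described above comes from.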
For a large number of subscribers, Dispatchers (and possibly Broadcasters) are still the best bet for now, since they won't claim a dedicated thread per subscriber.
The possible 2.1 evolution will be to make most dispatchers at least a Subscriber<Consumer<T>>, internally backed by a Processor<Task> where the Task optionally carries the origin Subscription. That means a Stream dispatching on this new component will pass its Subscription along with the task to run (which usually triggers the downstream). By keeping track of the number of components referencing this new "dispatcher", it will be possible to request a share of the available capacity and dramatically reduce contention by distributing executed tasks fairly across subscriptions.
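A rough sketch of what that fair-share bookkeeping could look like, assuming a `Task` that pairs the work with its origin and a capacity split evenly across registered origins. Every name here (`FairDispatcher`, `Task`, `tryDispatch`) is purely hypothetical, since this design is still being explored on the branch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the 2.1 idea: tasks carry their origin (subscription),
// and each registered origin only gets a fair slice of the total capacity,
// so one busy stream cannot starve the others sharing the dispatcher.
public class FairDispatcher {
    // A task pairs the work to run with the subscription that produced it.
    record Task(Object origin, Runnable work) {}

    private final int capacity;                        // total in-flight budget
    private final AtomicInteger registered = new AtomicInteger();
    private final ConcurrentHashMap<Object, AtomicInteger> inFlight =
            new ConcurrentHashMap<>();

    public FairDispatcher(int capacity) {
        this.capacity = capacity;
    }

    // A stream referencing this dispatcher registers its subscription first.
    public void register(Object origin) {
        registered.incrementAndGet();
        inFlight.put(origin, new AtomicInteger());
    }

    // Fair share: each origin may only occupy capacity / registered slots.
    public int fairShare() {
        return Math.max(1, capacity / Math.max(1, registered.get()));
    }

    // Accept a task only if its origin is within its fair share; otherwise the
    // origin has to wait and request again later (backpressure per origin).
    public boolean tryDispatch(Task task) {
        AtomicInteger used = inFlight.get(task.origin());
        if (used == null || used.incrementAndGet() > fairShare()) {
            if (used != null) used.decrementAndGet();
            return false;
        }
        try {
            task.work().run(); // in the real design this would go through a Processor<Task>
            return true;
        } finally {
            used.decrementAndGet();
        }
    }
}
```

The interesting property is that `fairShare()` shrinks automatically as more streams register, which is what would give the fair distribution of executed tasks per subscription mentioned above.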