low latency flexible thread messaging

Vlad Ilyushchenko

Aug 26, 2016, 8:56:18 PM
to mechanical-sympathy
I would like to collect feedback on a messaging system that I wrote. I have put together an introductory blog post here: http://blog.questdb.org/2016/08/the-art-of-thread-messaging.html for those who might be interested.

The design goals were:

- have queue processing decoupled from the thread, so that threads can do other things when queue processing is not possible at that moment
- have a clean, simple, consistent API, with no need to implement interfaces, register callbacks, etc.
- have great flexibility in building processing pipelines
- have low latency

The API is aimed at CEP systems, like those found in algo trading, pricing, etc. Performance is the same as the Disruptor's.
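
To give a flavour of the first goal, this is roughly the consumption style I have in mind; the names below are simplified placeholders rather than the actual API:

import java.util.concurrent.atomic.AtomicLong;

// Rough sketch only: a consumer that is never blocked by the queue,
// so the owning thread can interleave other work. Names are placeholders.
final class PollingConsumerSketch {
    static final int SIZE = 1024;                           // ring capacity, power of two
    static final String[] ring = new String[SIZE];
    static final AtomicLong published = new AtomicLong(-1); // highest sequence made visible by producers
    static long consumed = -1;                              // last sequence this consumer has processed

    static void runLoop() {
        while (!Thread.currentThread().isInterrupted()) {
            long next = consumed + 1;
            if (next > published.get()) {
                doOtherWork();                              // nothing to read: thread is free for other tasks
                continue;
            }
            process(ring[(int) (next & (SIZE - 1))]);       // unbounded sequence -> bounded slot
            consumed = next;
        }
    }

    static void doOtherWork() { Thread.yield(); }           // placeholder for "other things"
    static void process(String event) { /* handle event */ }
}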

Any feedback would be gratefully received.

Richard Warburton

Aug 29, 2016, 4:47:47 PM
to mechanica...@googlegroups.com
Hi,
Thanks for sharing, it's always interesting to see people experimenting with different libraries. I was quite surprised that you chose to make the queue unbounded; can you explain why?

regards,

  Richard Warburton

Vlad Ilyushchenko

Aug 29, 2016, 7:17:41 PM
to mechanical-sympathy
Hi Richard,

The queue itself is circular, backed by a bounded array. What's unbounded is the sequence. It's the same idea as used in the Disruptor.
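
In other words, the sequence grows without bound and the physical slot is derived from it, roughly like this (simplified, not the actual code):

// Simplified illustration: an ever-growing sequence mapped onto a fixed-size ring.
public class SequenceMappingSketch {
    public static void main(String[] args) {
        final int capacity = 1024;                 // must be a power of two
        final int mask = capacity - 1;             // so the modulo reduces to a cheap mask
        long sequence = 5_000_000_123L;            // unbounded, monotonically increasing
        int slot = (int) (sequence & mask);        // bounded array index, wraps automatically
        System.out.println("sequence " + sequence + " -> slot " + slot);
    }
}

A producer is only allowed to claim a sequence while it stays within capacity of the slowest consumer, which is what keeps the array itself bounded.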

ymo

Aug 30, 2016, 2:34:01 PM
to mechanical-sympathy
From an API point of view, the chaining API is just superb ... )))

Vlad Ilyushchenko

Aug 30, 2016, 3:12:40 PM
to mechanical-sympathy
I hope you are not being sarcastic :) The API is certainly divisive...

Michael Barker

Aug 30, 2016, 4:44:22 PM
to mechanica...@googlegroups.com
Having the sequence return negative values is something that I've been planning for the next version of the Disruptor. Returning a failure on a failed CAS is an interesting one; I've been trying to decide myself whether to do the same thing.
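
Something along these lines, where a failed claim is reported back instead of being retried internally (rough sketch only, made-up names):

import java.util.concurrent.atomic.AtomicLong;

// Sketch only: a multi-producer claim that reports failure instead of spinning.
final class TryClaimSketch {
    final AtomicLong cursor = new AtomicLong(-1);  // last claimed sequence
    final int capacity;
    volatile long consumerSequence = -1;           // slowest consumer, updated elsewhere

    TryClaimSketch(int capacity) { this.capacity = capacity; }

    /** Returns the claimed sequence, or a negative value if the ring is full or the CAS lost. */
    long tryNext() {
        long current = cursor.get();
        long next = current + 1;
        if (next - consumerSequence > capacity) {
            return -1;                             // ring full: caller decides what to do
        }
        return cursor.compareAndSet(current, next) ? next : -2; // lost the race: report, don't retry
    }
}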

Mike.

On 31 August 2016 at 07:12, Vlad Ilyushchenko <blues...@gmail.com> wrote:
I hope you are not being sarcastic :) The API is certainly divisive...

ymo

Aug 30, 2016, 10:11:41 PM
to mechanical-sympathy
No, I'm serious. I was so disgusted with Java internal DSLs that I went on to learn about Kotlin .. and code generation ((( You gave me some hope that it can be done!
I am also not sure how you pulled it off, but being able to subscribe/unsubscribe on the fly while keeping the same fluent API is super cool ... to say the least! I don't think you can subscribe/unsubscribe on the fly to a "queue" on the Disruptor (last time I checked), or any other queues out there that I know about.

Michael Barker

Aug 30, 2016, 11:17:18 PM
to mechanica...@googlegroups.com
Technically you can subscribe/unsubscribe while the Disruptor is running, but it is not available in the DSL.
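
Roughly like this with the raw RingBuffer; untested and from memory, so treat it as a sketch:

import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.Sequence;
import com.lmax.disruptor.Sequencer;

// Untested sketch, from memory: attaching and detaching a consumer's sequence
// while producers keep publishing. The DSL does not expose this directly.
public class DynamicSubscriberSketch {
    public static void main(String[] args) {
        RingBuffer<long[]> ringBuffer =
                RingBuffer.createMultiProducer(() -> new long[1], 1024);

        // "Subscribe": create a sequence for the new consumer and gate producers on it.
        Sequence subscriber = new Sequence(Sequencer.INITIAL_CURSOR_VALUE);
        ringBuffer.addGatingSequences(subscriber);

        // ... the consumer polls the ring and advances `subscriber` as it processes events ...

        // "Unsubscribe": stop gating producers on this consumer.
        ringBuffer.removeGatingSequence(subscriber);
    }
}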

Mike.

Vlad Ilyushchenko

Aug 31, 2016, 9:12:17 AM
to mechanical-sympathy
Having sequences fail fast is handy for work scheduling that can take advantage of it. I feel another blog post coming up ...
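
A rough sketch of what I mean, with made-up names: a single thread driving several jobs, each of which backs off immediately when its queue's sequence fails:

// Sketch with made-up names: a single thread driving several jobs.
// Each job returns false when its queue's sequence fails fast (nothing claimable),
// so the scheduler simply moves on instead of blocking.
interface Job {
    boolean run();   // true = did useful work, false = queue unavailable right now
}

final class WorkerSchedulerSketch implements Runnable {
    private final Job[] jobs;

    WorkerSchedulerSketch(Job... jobs) { this.jobs = jobs; }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            boolean useful = false;
            for (Job job : jobs) {
                useful |= job.run();       // fail-fast sequences keep this loop non-blocking
            }
            if (!useful) {
                Thread.yield();            // whole pass was idle: back off a little
            }
        }
    }
}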

Sean Allen

Sep 11, 2016, 10:59:46 AM
to mechanica...@googlegroups.com

On Fri, Aug 26, 2016 at 8:56 PM, Vlad Ilyushchenko <blues...@gmail.com> wrote:
Performance is the same as the Disruptor's.

Vlad,

This looks really interesting. Looking at the benchmarks at the end of the post, though, I don't see how the performance is the same as the Disruptor's. I assume that is because I'm reading the benchmarks incorrectly; could you explain them, and how the results for QuestDB and the Disruptor are equivalent?

Vlad Ilyushchenko

Sep 12, 2016, 7:47:17 PM
to mechanical-sympathy
Hi Sean,

I benchmarked two solutions to the same problem: QuestdbFanOut and QuestdbWorker. QuestdbFanOut is exactly equivalent to the Disruptor setup in that it partitions work evenly between consumer threads. These threads have visibility of all queue items, but they skip over "foreign" partition items. QuestdbFanOut benefits heavily from batching, just as the Disruptor does. The difference in benchmark score is splitting hairs, really.
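
Schematically, the fan-out consumer does something like this per pass (simplified, not the actual code):

import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of the fan-out case: every consumer sees every sequence,
// but only processes items in its own partition. Batching falls out naturally,
// because a whole run of available sequences is drained per pass.
final class FanOutConsumerSketch {
    final Object[] ring;             // length assumed to be a power of two
    final int mask;
    final AtomicLong published;      // highest sequence made visible by producers
    final int myIndex;               // this consumer's partition, 0..consumerCount-1
    final int consumerCount;
    long lastSeen = -1;              // last sequence this consumer has looked at

    FanOutConsumerSketch(Object[] ring, AtomicLong published, int myIndex, int consumerCount) {
        this.ring = ring;
        this.mask = ring.length - 1;
        this.published = published;
        this.myIndex = myIndex;
        this.consumerCount = consumerCount;
    }

    void drain() {
        long available = published.get();
        for (long seq = lastSeen + 1; seq <= available; seq++) {
            if (seq % consumerCount == myIndex) {        // skip "foreign" partition items
                process(ring[(int) (seq & mask)]);
            }
        }
        lastSeen = available;                            // one visible update per batch
    }

    void process(Object event) { /* handle event */ }
}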

QuestdbWorker is a pure worker implementation, in that worker consumers don't necessarily process the same number of queue items. It is slower due to constant interaction with memory barriers, and it is at a heavy disadvantage in this particular benchmark because it can't benefit from batching. Despite that, workers can be useful when queue item processing cost is non-uniform.
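
For comparison, the worker side is closer to this (again simplified, made-up names): each item is claimed by exactly one consumer, so every claim goes through a CAS and there is nothing to batch.

import java.util.concurrent.atomic.AtomicLong;

// Simplified sketch of the worker case: consumers compete for items one at a
// time via CAS, so each claim crosses a memory barrier and batching is lost.
final class WorkerConsumerSketch {
    final Object[] ring;             // length assumed to be a power of two
    final int mask;
    final AtomicLong published;      // highest sequence made visible by producers
    final AtomicLong claimed;        // shared among all workers

    WorkerConsumerSketch(Object[] ring, AtomicLong published, AtomicLong claimed) {
        this.ring = ring;
        this.mask = ring.length - 1;
        this.published = published;
        this.claimed = claimed;
    }

    /** Tries to claim and process a single item; returns false if nothing was available. */
    boolean runOnce() {
        long next = claimed.get() + 1;
        if (next > published.get()) {
            return false;                                // nothing available: fail fast
        }
        if (!claimed.compareAndSet(next - 1, next)) {
            return false;                                // another worker won the race
        }
        process(ring[(int) (next & mask)]);              // exactly one worker handles this item
        return true;
    }

    void process(Object event) { /* handle event */ }
}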