Re: The project looks really interesting


Jin Mingjian

Jul 19, 2015, 8:56:30 PM
to Rajiv Kurian, la...@googlegroups.com
Hi Rajiv, I recently re-checked the Disruptor sources. The Disruptor's approach is essentially equivalent to GenericHyperLoop plus a user-maintained object array. Consequently, the Disruptor cannot guarantee 100% thread safety. But do not panic: it achieves what I would call a "practical" safety.

On Sun, Feb 2, 2014 at 2:25 AM, Rajiv Kurian <geet...@gmail.com> wrote:


On Saturday, February 1, 2014 4:06:36 AM UTC-8, Jin Mingjian wrote:
Hi Rajiv, you are almost there :) Thanks for the feedback. In fact, this design returns control of the object to the user; that is, the user decides where the sent object comes from. You can cache/pre-allocate your objects. If you want to send primitive-type values, you can use LongHyperLoop or IntHyperLoop. The slot (of the internal buffer) is needed in both the Disruptor and HyperLoop (RingBuffer); the Disruptor uses setValue(Object) to populate the slots. They are basically the same idea at the core, having investigated the Disruptor carefully. The Disruptor has other helper classes and may provide a little more functionality, so the Disruptor has more garbage side objects.
 
 

You can think of it like this: you use the Disruptor to transfer messages from one thread to another, and the messages very likely need to be generated on the fly (that is, they are unlikely to pre-exist). Then all the messages in the Disruptor are garbage, except primitive-type values. For primitive-type values/messages, landz has specialized versions, as I mentioned above: LongHyperLoop and IntHyperLoop are garbage-free. If you truly want to reuse on-the-fly messages, you can use a pre-allocated array or pool, as in the sketch below.
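As a rough illustration of that last point, independent of any particular HyperLoop API (the Message class and the send callback below are just placeholders, not landz names):

import java.util.function.Consumer;

class Message {
  long id;
  int kind;
}

class PreallocatedSender {
  // All messages are allocated once; the producer cycles through them instead of
  // creating a new Message per send, so steady-state sending produces no garbage.
  // Assumes no more than POOL_SIZE messages are ever in flight at once, and that
  // POOL_SIZE is a power of two so the index mask works.
  static final int POOL_SIZE = 1024;
  private final Message[] pool = new Message[POOL_SIZE];
  private final Consumer<Message> transport;  // stands in for whatever send call the loop exposes
  private long next = 0;

  PreallocatedSender(Consumer<Message> transport) {
    this.transport = transport;
    for (int i = 0; i < POOL_SIZE; i++) pool[i] = new Message();
  }

  void send(long id, int kind) {
    Message m = pool[(int) (next++ & (POOL_SIZE - 1))];  // reuse a pre-allocated message
    m.id = id;
    m.kind = kind;
    transport.accept(m);
  }
}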

The Disruptor will work without any allocation as long as the event classes themselves are composed of primitives, or of other classes that are composed of a static number of primitives. For example, if you had classes like:
class SampleEvent {
  long a, b, c;
  Container d;
}

class Container {
  int a, b, c;
  byte msgId;
}
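To make that concrete, here is a rough sketch of how such fixed-size events stay allocation-free with the Disruptor (Disruptor 3.x API; it assumes SampleEvent's constructor also does d = new Container(), so the whole object graph is pre-allocated up front):

import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.YieldingWaitStrategy;

public class PreallocatedExample {
  public static void main(String[] args) {
    // Every SampleEvent is created once, up front, by the factory; after that,
    // producers and consumers only overwrite fields of the objects already in the slots.
    RingBuffer<SampleEvent> ring = RingBuffer.createSingleProducer(
        SampleEvent::new, 1024, new YieldingWaitStrategy());

    long seq = ring.next();            // claim the next slot
    try {
      SampleEvent e = ring.get(seq);   // reuse the pre-allocated object sitting in that slot
      e.a = 42L;
      e.d.msgId = 7;
    } finally {
      ring.publish(seq);               // hand the slot to the consumer; nothing was allocated
    }
  }
}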

The problem arises when you have events with dynamically sized members, for example ByteBuffer, String, etc. In that case you have to implement your own pooling. The get method provides a perfect interface to reclaim objects back on the producer side and put them back in the pool. This means your pool can be single-threaded and live completely on the producer side, where it should be.

You say that users can implement their own pooling as desired, but the only interface I see to get objects back from the GenericHyperLoop is receive, tryReceive, etc. These methods update the cursor and need to be called on the consumer thread, which means that if you want to put these objects back in a pool, your pool needs to be multi-threaded, with getObject called on the producer thread and releaseObject called on the consumer thread. In the Disruptor, when the consumer signals that it has processed an entry, this information eventually reaches the producer through the cursor. The producer can use it to reclaim objects on its own thread without an additional multi-threaded pool implementation. Example: https://github.com/RajivKurian/Java-NIO-example/blob/master/src/main/java/com/rajiv/rbutils/ResourceCollectorRingBuffer.java
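To sketch the idea (this is not the code behind that link, just an illustration; it assumes Disruptor 3.x, a single producer, and that the consumer's sequence has been registered via addGatingSequences so getMinimumGatingSequence reflects what has actually been processed):

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.YieldingWaitStrategy;

class BufferEvent {
  ByteBuffer payload;  // the dynamically sized member we want to pool
}

class ProducerSidePool {
  private final RingBuffer<BufferEvent> ring =
      RingBuffer.createSingleProducer(BufferEvent::new, 1024, new YieldingWaitStrategy());
  private final ArrayDeque<ByteBuffer> pool = new ArrayDeque<>();  // only touched by the producer thread
  private long nextToReclaim = 0;

  void send(ByteBuffer data) {
    reclaim();                         // recycle payloads the consumer has already processed
    long seq = ring.next();
    try {
      ring.get(seq).payload = data;
    } finally {
      ring.publish(seq);
    }
  }

  // Every slot up to the minimum gating sequence has been consumed, so the producer
  // can take those payloads back without any extra cross-thread coordination.
  private void reclaim() {
    long consumedUpTo = ring.getMinimumGatingSequence();
    for (; nextToReclaim <= consumedUpTo; nextToReclaim++) {
      BufferEvent e = ring.get(nextToReclaim);
      if (e.payload != null) {
        pool.push(e.payload);
        e.payload = null;
      }
    }
  }

  ByteBuffer acquire(int size) {
    ByteBuffer b = pool.poll();        // reuse if we can, allocate only when the pool is empty
    if (b == null || b.capacity() < size) return ByteBuffer.allocate(size);
    b.clear();
    return b;
  }
}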

Maybe I am missing something though.

Jin 




On Sat, Feb 1, 2014 at 4:33 PM, Rajiv Kurian <geet...@gmail.com> wrote:


On Saturday, February 1, 2014 12:32:17 AM UTC-8, Rajiv Kurian wrote:
I've been going through the source and there is a lot of stuff to see.

The style of the code seems very C-like, dealing in primitives wherever possible instead of objects. I see you are handing out addresses (longs) from alloc instead of ByteBuffers, etc. This seems great for perf, though I wonder how different it is from writing C/C++ straight up ;)
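For readers who have not seen the style before, raw-address code in Java looks roughly like the following (plain sun.misc.Unsafe, not landz's actual allocator; the class name is made up):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class AddressStyle {
  private static final Unsafe UNSAFE;
  static {
    try {
      // Unsafe.getUnsafe() is restricted, so grab the instance via reflection.
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      UNSAFE = (Unsafe) f.get(null);
    } catch (ReflectiveOperationException e) {
      throw new ExceptionInInitializerError(e);
    }
  }

  public static void main(String[] args) {
    long address = UNSAFE.allocateMemory(64);  // "alloc" hands back a raw address, not an object
    UNSAFE.putLong(address, 0xCAFEBABEL);      // write straight to that memory, no bounds checks
    long value = UNSAFE.getLong(address);
    UNSAFE.freeMemory(address);                // the caller is responsible for freeing
    System.out.println(Long.toHexString(value));
  }
}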

Another thing I noted is that your ring buffer implementation fulfills more of a queue interface: it accepts new T objects instead of reusing existing ones. How does that compare to the Disruptor, where you get an old slot and just change the values on an existing object? The queue approach seems like it would create more garbage.

Have you measured Java's NIO performance vs. your wrapper over the epoll syscalls? I'm curious to know what in the Java implementation turned you off. Are the JNI overheads justified by the extra perf gain?

I am sure I'll have more questions as I read the source. Thanks again!




On Saturday, February 1, 2014 12:12:29 AM UTC-8, Jin Mingjian wrote:

Thanks, Rajiv, the link is right; it was my mistake to forget to attach the URL. I have just recovered from a cold and am in a half-vacation state, but I will keep working on it part-time.

On Feb 1, 2014 at 3:11 PM, "Rajiv Kurian" <geet...@gmail.com> wrote:
I came across your post on the mechanical sympathy list.

I read the page at http://landz.github.io/ to figure out the aim of the project. Would it be correct to say landz aims to provide more of a library approach without prescribing a style, so that users are free to mix and match components like the ring-buffer/queue implementations, the general-purpose memory allocator, and the network and IO facilities to build their own applications?

Excited to see how the project shapes up! Thanks for sharing.


