--
You received this message because you are subscribed to the Google Groups "reactor-framework" group.
To unsubscribe from this group and stop receiving emails from it, send an email to reactor-framew...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
On Monday, December 16, 2013 at 10:27 AM, Laurent T. wrote:
On Friday, January 17, 2014 at 11:08 AM, Laurent T. wrote:
I'm following up on this thread as we're trying to go further than a simple "Hello World" and are having some trouble understanding how all this works. First, here's what we're looking to achieve in the end:
- A request comes in with its payload
- The payload is transformed
- The transformed payload is sent to various processes asynchronously
- The various results are combined to build our HttpResponse
- The response is sent back to the user
Performance is really important for us, and we may even choose to abandon some results if they take too much time to come in (like it's possible to do with the poll method of a BlockingQueue).
In our dev environment, we managed to bench a simple do-nothing-just-answer-OK implementation at around 50k QPS. We then added a Thread.sleep(10) before returning the HttpResponse to simulate the time it would take to process the request. We went down to 100 QPS, which is what is expected for sequential processing of all the requests. I was a bit surprised, as I thought requests would magically be parallelized by the TcpServer. I guess I missed something.
Could someone point us in the right direction? We're having a hard time finding examples of how to use Reactor, so if you have any tutorials or examples we'd be happy to see them.
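To make the abandon-slow-results idea concrete, here's a rough sketch using plain JDK CompletableFuture (not Reactor's API; the 200 ms deadline and the fake task bodies are invented for illustration):

```java
import java.util.concurrent.*;

public class FanOutWithDeadline {
    // Combine several async results, substituting a fallback for any that
    // miss the deadline (analogous to BlockingQueue.poll with a timeout).
    static String handle(String payload) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            String transformed = payload.trim(); // payload transformation step
            CompletableFuture<String> fast = CompletableFuture
                .supplyAsync(() -> "A:" + transformed, pool)
                .completeOnTimeout("A:timeout", 200, TimeUnit.MILLISECONDS);
            CompletableFuture<String> slow = CompletableFuture
                .supplyAsync(() -> { sleep(1000); return "B:" + transformed; }, pool)
                .completeOnTimeout("B:timeout", 200, TimeUnit.MILLISECONDS);
            // Combine whatever arrived in time into the response body.
            return fast.thenCombine(slow, (a, b) -> a + "|" + b).get();
        } finally {
            pool.shutdown();
        }
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handle(" hello "));
    }
}
```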
You probably want to have a Reactor or Stream that uses a ThreadPoolDispatcher if you're going to be doing blocking calls. That's the only way to get magical parallelism. :)
If doing a blocking call, you'll want to jump out of the event thread to a thread pool, and make sure not to block on anything (e.g. by calling await() on a Promise or get() on a Future); use callbacks instead.
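In plain java.util.concurrent terms (outside Reactor's API), the pattern looks roughly like this; blockingQuery is a made-up stand-in for any blocking call:

```java
import java.util.concurrent.*;

public class OffloadBlockingCall {
    // Sketch: the event thread hands blocking work to a worker pool and
    // receives the result through a callback (thenAccept) instead of
    // blocking on get()/await().
    static String fetchViaCallback(String key) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        try {
            CountDownLatch done = new CountDownLatch(1);
            StringBuilder response = new StringBuilder();
            CompletableFuture
                .supplyAsync(() -> blockingQuery(key), workers) // blocking call runs on the pool
                .thenAccept(result -> {                         // callback, not get()/await()
                    response.append(result);
                    done.countDown();
                });
            // A real event loop would keep serving other requests here; this
            // demo only waits so the method can return the callback's result.
            done.await();
            return response.toString();
        } finally {
            workers.shutdown();
        }
    }

    // Hypothetical stand-in for a blocking IO call, for illustration only.
    static String blockingQuery(String key) {
        try { Thread.sleep(100); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "result-for-" + key;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchViaCallback("users"));
    }
}
```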
On Monday, January 20, 2014 at 5:11 AM, Laurent T. wrote:
I actually thought after reading your answer that we would lose base performance by switching to a ThreadPoolDispatcher. Shouldn't that be the case?
Anyway, we then retested with the Thread.sleep(10) and the results were much better: 800 QPS. We also ran some variants of that test:
- 10ms on 10% of the requests => 8k QPS
- 50ms on 10% of the requests and 1ms on the other 90% => 1300 QPS
We do not yet know how fast we will process the data, but that last one should be our worst-case scenario. Our goal here is to reach 10k QPS per server. Right now these tests are done on a dev server that has 8 GB of RAM and 8 CPU cores. Do you have any information on how this should scale depending on the hardware?
> If doing a blocking call, you'll want to jump out of the event thread to a thread pool and make sure not to block on anything (by calling await() on a Promise or get() on a Future) but instead use callbacks.

What do you mean by "jump out"? Is this done by the configuration I set, or should I do something more inside my Consumer or Function?
I'm also wondering now what's the best way to achieve the next step. Let's say I want to:
- First decode POST data to get parameters for the following tasks
- Then, simultaneously, query a database and call an API
- Finally, group those two results (database and API) to form an HttpResponse
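To make that concrete, here's a rough sketch of the shape we're after, using plain JDK CompletableFutures rather than Reactor (the decode step and the db/api stand-ins are invented for illustration):

```java
import java.util.concurrent.*;

public class DecodeThenFanOut {
    static String handle(String postBody) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // Step 1: decode the POST data (cheap, stays on the calling thread).
            String id = postBody.split("=")[1];
            // Step 2: query the database and call the API in parallel.
            CompletableFuture<String> db  = CompletableFuture.supplyAsync(() -> "row-" + id, pool);
            CompletableFuture<String> api = CompletableFuture.supplyAsync(() -> "api-" + id, pool);
            // Step 3: group both results into the HTTP response body.
            return db.thenCombine(api, (d, a) -> "{db:" + d + ", api:" + a + "}").get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handle("id=42"));
    }
}
```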
In the TcpServer example you posted, there's a consumer that maps the stream before consuming the HttpResponse it sends back. You added the following comment: "Use the Stream API rather than consuming messages directly." What does that really change?
Also, what's the difference between map and consume? I see you can chain multiple map or consume calls one after the other.
I just meant that if you're using the RingBufferDispatcher (either explicitly or by using the default) then you don't want to do any blocking at all, since it's a single, shared thread. If you *are* doing blocking IO that's fine, you just need to make sure one of two things happens: you use the ThreadPoolExecutorDispatcher on the server so that events leave the Netty IO thread and go into your thread pool, or you use the RingBufferDispatcher for some part of your processing but notify another Reactor that *is* using the ThreadPoolExecutorDispatcher (or you submit a job to a standard thread pool). The key is to get the blocking IO done in a thread other than the single RingBuffer thread powering the event stream.

If you use the ThreadPoolExecutorDispatcher in the server because it's probably the lowest-friction route, keep in mind that every notify() will submit to the thread pool. That means if you use a Stream and have 5 steps, you'll have (potentially) 5 context switches. That can add up. It's best to be conservative with thread pools: use them as infrequently as possible, only cross the thread boundary when you absolutely have to, and then only do it once or twice.
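In plain JDK terms (not Reactor's API), "cross the boundary once" looks roughly like this: only the first stage submits to the pool, and the follow-up steps run without any further pool submissions:

```java
import java.util.concurrent.*;

public class SingleHandoff {
    // Chain the steps with thenApply, which runs on the thread that completed
    // the previous stage and does NOT submit a new task to the pool, instead
    // of using thenApplyAsync per step (one pool submission and potential
    // context switch for each).
    static String process(String input, ExecutorService pool) throws Exception {
        return CompletableFuture
            .supplyAsync(() -> input.trim(), pool) // the single handoff to the pool
            .thenApply(String::toUpperCase)        // no extra pool submission
            .thenApply(s -> s + "!")               // no extra pool submission
            .get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        System.out.println(process(" ok ", pool));
        pool.shutdown();
    }
}
```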
On Tuesday, January 21, 2014 at 3:40 AM, Laurent T. wrote:
Thanks for all those details and the accuracy of your answers. I hope I'm not taking too much of your time and that this topic will be useful to others.
I'm not sure I yet understand the concept of blocking IO vs. non-blocking IO and how it affects my coding, but from what I've been reading on it, it's actually quite interesting. If I understand it a bit, I would say that, for instance, a database query is blocking IO but reading POST data isn't. So let's take the case in which, from time to time, we receive invalid POST data; we can tell just by looking at it, and so we need to answer a 400. In that case, would having a ThreadPoolExecutorDispatcher be less efficient than having a RingBufferDispatcher that answers the 400s right away and relays blocking IO to another Reactor or Thread, passing the TcpConnection.out() Consumer as a parameter?
By the way, can I create as many Reactors as I want?
What's the best way to share them with one another so they can communicate ?
Thanks again for your help in understanding Reactor. You're proving invaluable.