Is there a way to do strict round robin with event bus?


javadevmtl

Apr 29, 2013, 5:57:08 PM
to ve...@googlegroups.com
Using vertx 1.3.1

In the docs the following is stated.

"The event bus supports point to point messaging. Messages are sent to an address. This means a message is delivered to at most one of the handlers registered at that address. If there is more than one handler registered at the address, one will be chosen using a non-strict round-robin algorithm."

I use the main app pattern where one verticle deploys everything...

in my start()

container.deployModule("MYJDBC-1.0", jdbcConfig1);
container.deployModule("MYJDBC-1.0", jdbcConfig2);
container.deployVerticle("MYMainHTTPVerticle.java", webConfig, 32);

I execute my load test, and when I look in my database the row count in the table is off by a couple of thousand. So this seems like the "non-strict" part.

But here is the kicker: if I deploy the 2 JDBC modules on 2 separate hosts and the main verticle on a 3rd host and cluster everything, the table entries are split exactly down the middle, and this is repeatable.

Is there a way to make it strict round robin all the time?



thir...@gmx.net

Jun 29, 2013, 9:48:43 PM
to ve...@googlegroups.com
I've asked myself a question that probably points in a similar direction...

How many cores do you have? Since you're deploying 32 webserver verticles, I'm inclined to think 32. But what about the two workers (the JDBC modules)? They run on the "background thread pool", but where do the threads in that pool get their CPU time from? And how does that affect the behaviour of the servers? In any case, it seems like you should reserve "some" cores for the worker threads...

So I'm suspecting it has something to do with how vert.x schedules the servers and workers (i.e. their event bus handlers) on the different CPUs/threads... But I'm not sure I have the full picture, and maybe someone could clarify how the hybrid threading model plays out here.

John Smith

Jun 29, 2013, 10:55:25 PM
to ve...@googlegroups.com

If you want true strict round robin, register the handlers at different addresses and use an atomic int, incremented with a modulus, to pick the right address. I've done this and it works pretty well. I took it a step further and created a Hazelcast clustered index module so I can increment the index cluster-wide.
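A minimal sketch of that idea in plain Java (the class name and the `jdbc.1`/`jdbc.2` address names are made up for illustration; in a verticle you'd feed `next()` into `eb.send(...)`):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinAddresses {
    private final List<String> addresses;
    private final AtomicInteger counter = new AtomicInteger(0);

    public RoundRobinAddresses(List<String> addresses) {
        this.addresses = addresses;
    }

    // Strict round robin: each call returns the next address in turn.
    public String next() {
        // floorMod keeps the index valid even if the counter wraps negative
        int idx = Math.floorMod(counter.getAndIncrement(), addresses.size());
        return addresses.get(idx);
    }

    public static void main(String[] args) {
        RoundRobinAddresses rr =
                new RoundRobinAddresses(List.of("jdbc.1", "jdbc.2"));
        for (int i = 0; i < 4; i++) {
            System.out.println(rr.next()); // jdbc.1, jdbc.2, jdbc.1, jdbc.2
        }
    }
}
```

Each JDBC module instance would register its handler at its own address, and the sender picks the target with `next()`, so the split is exact by construction.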

--
You received this message because you are subscribed to a topic in the Google Groups "vert.x" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/vertx/2_TRSmmxoKo/unsubscribe.
To unsubscribe from this group and all of its topics, send an email to vertx+un...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

stream

Jun 29, 2013, 11:43:14 PM
to ve...@googlegroups.com

Well, I think this comes down to the details of the event loop and the background (worker) threads.

As far as I know, event loop threads are scheduled regularly, and the order of execution is up to your code (i.e. the order of nested handlers). The number of event loops defaults to the number of cores on your machine, and a handler runs on exactly one event loop thread at a time.

Background (worker) threads, on the other hand, are scheduled in order: for a given worker, only one thread executes at a time, to avoid thread contention, even if you have 32 cores.

So here's my guess at what happens with 32 cores, 32 webservers and 2 workers: two thread pools get created, an event loop pool of size 32 and a worker thread pool of size 20 (the default).

Since your handlers don't synchronize with each other directly, you can imagine them running in parallel. One core's usage may be higher than the others because it's doing the worker's work, but only one core, since each worker is scheduled in order.

You can make your worker run multi-threaded in vert.x 2.
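The "one thread at a time per worker" behaviour can be modelled with a plain single-threaded executor (a sketch in standard Java, not vert.x itself; the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderedWorkerDemo {

    // A single-threaded executor behaves like an ordered vert.x worker:
    // its tasks never run concurrently, so they finish in submission order.
    static List<Integer> runOrdered(int tasks) throws InterruptedException {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        List<Integer> done = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < tasks; i++) {
            final int n = i;
            worker.submit(() -> done.add(n));
        }
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
        return done;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOrdered(5)); // always [0, 1, 2, 3, 4]
    }
}
```

With a multi-threaded pool instead (`Executors.newFixedThreadPool(n)`), the completion order would no longer be guaranteed, which is the trade-off a multi-threaded worker makes.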

I hope someone corrects me if I've got something wrong here.
Thanks.

