Multiple verticle instances, routes and handlers


Igor Spasić

unread,
Jun 25, 2015, 3:02:32 PM6/25/15
to ve...@googlegroups.com
Vert.x 3.

I am deploying several instances of a server verticle. Each verticle instance creates an HTTP server. Each verticle instance also creates a route to some path and registers a new handler instance. Therefore, I have N verticles, N routes and N handler instances.

Now I fired two requests (and put a small sleep in the handler). What happens is:

+ for each request a different thread is used - this is obvious, I am perfectly fine with it.

+ however, in both cases the same handler instance is used, which I did not expect. Looking at the code, it seems that since I have N routes to the same path, the first one in the list will always be matched, so the other N-1 routes and handlers simply do not do anything.

Is this valid behavior? If yes, does this mean that I can create only one route and handler instance and share it among the verticle instances? Please help.

Julien Viet

unread,
Jun 25, 2015, 3:52:49 PM6/25/15
to ve...@googlegroups.com, Igor Spasić
Hi,

Can you share a minimal setup that reproduces this behavior, so we can run it and help you?

-- 
Julien Viet
www.julienviet.com
--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vertx+un...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Igor Spasić

unread,
Jun 25, 2015, 3:54:05 PM6/25/15
to Julien Viet, ve...@googlegroups.com
sure.

Igor Spasić

unread,
Jun 25, 2015, 4:22:23 PM6/25/15
to ve...@googlegroups.com, igor....@gmail.com
So here it is:


See the ReadMe. BarVerticle is a dynamic verticle that has to plug into the existing app at runtime.

Please let me know what's wrong.

Julien Viet

unread,
Jun 25, 2015, 5:12:52 PM6/25/15
to ve...@googlegroups.com, Igor Spasić, igor....@gmail.com
You have this behavior because you share the same Router across all verticles and their respective HTTP servers (via the static AppServer.ref field): you need to change this and create a Router per HttpServer in ServerVerticle. Then the BarVerticle should not exist; instead, its functional logic should be merged into the ServerVerticle.

In a nutshell, you should have the entire setup of your Router in a single verticle (ServerVerticle) and then deploy this verticle a certain number of times: Vert.x HttpServer sharing (same host/port) will take care of distributing the HTTP requests among the existing HttpServer instances.
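In code, a minimal sketch of this setup (assuming Vert.x 3 with vertx-web on the classpath; the class name, route and port are illustrative):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

// One verticle class owns the whole Router setup; each deployed instance
// builds its OWN Router and HttpServer.
public class ServerVerticle extends AbstractVerticle {
  @Override
  public void start() {
    Router router = Router.router(vertx);                         // per-instance Router
    router.get("/bar").handler(ctx -> ctx.response().end("bar"));
    // All instances listen on the same port; Vert.x shares the port and
    // round-robins incoming connections among them.
    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  }

  public static void main(String[] args) {
    // Deploy the same verticle N times instead of writing N verticle classes.
    Vertx.vertx().deployVerticle(ServerVerticle.class.getName(),
        new DeploymentOptions().setInstances(4));
  }
}
```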

HTH

-- 
Julien Viet
www.julienviet.com

Igor Spasić

unread,
Jun 26, 2015, 2:42:55 AM6/26/15
to ve...@googlegroups.com, igor....@gmail.com
Thank you very much - I needed this confirmation!

Tim Fox

unread,
Jun 26, 2015, 3:28:15 AM6/26/15
to ve...@googlegroups.com
+1. This question comes up quite often.

We should create an example that shows how to effectively scale applications. I've seen quite a few examples like this with people using way more classes than they need to.

Igor Spasić

unread,
Jun 26, 2015, 3:37:38 AM6/26/15
to ve...@googlegroups.com
Great!

What I personally would find useful would be described scenarios in the documentation. For example:

scenario #1: shared routes, deployed 10 server verticles
Then explain what is in memory: e.g. 10 verticles have round-robin listeners on the same port; each route has its route implementation in memory; what is the max usage of threads, etc.
And then pros/cons - like: how this scales, etc.
scenario #2: 10 server verticles, 10 worker verticles...

and so on. In other words, maybe there is no need to spend time crafting the examples; maybe this could be done in less time and yet give even more information...

Just an idea - don't mind :)

Alan

unread,
Aug 29, 2015, 3:29:43 PM8/29/15
to vert.x
Have any examples been published yet to help the community scale applications the recommended Vert.x way, using multiple HttpServer instances?

Thanks!

Mitch Walker

unread,
May 11, 2017, 12:05:47 PM5/11/17
to vert.x
Found this thread while researching the same thing. Did an example ever come out of it? I haven't found it yet. My question is: is there any benefit to multiple "primary" verticles if they all do the same thing, vs. a single verticle instance and scaling the number of worker threads, which I believe Vert.x attempts to do automatically based on available cores? Multiple verticles pose other interesting issues, like coordinating a health-check handler for the entire process, concurrent request limiting, etc.

Thanks!

Jochen Mader

unread,
May 11, 2017, 12:17:30 PM5/11/17
to ve...@googlegroups.com
Vert.x will only use a number of threads equivalent to the number of deployed Verticle instances.




--
Jochen Mader | Lead IT Consultant

codecentric AG | Elsenheimerstr. 55a | 80687 München | Deutschland
tel: +49 89 215486633 | fax: +49 89 215486699 | mobil: +49 152 51862390
www.codecentric.de | blog.codecentric.de | www.meettheexperts.de

Sitz der Gesellschaft: Düsseldorf | HRB 63043 | Amtsgericht Düsseldorf
Vorstand: Michael Hochgürtel . Rainer Vehns
Aufsichtsrat: Patric Fedlmeier (Vorsitzender) . Klaus Jäger . Jürgen Schütz

Mitch Walker

unread,
May 11, 2017, 1:05:04 PM5/11/17
to vert.x
Not sure that answers my question, and the docs, which I've read, do not jibe with what you say. At any rate, I don't want this to turn into a Vert.x threading discussion (which is interesting :)); instead, if there is a prescriptive example of how to scale, as was requested/suggested, that would be good to know.

You:
Vert.x will only use a number of threads equivalent to the number of deployed Verticle instances.
 
Docs:
Instead of a single event loop, each Vertx instance maintains several event loops. By default we choose the number based on the number of available cores on the machine, but this can be overridden.


Tim Fox

unread,
May 11, 2017, 1:19:48 PM5/11/17
to vert.x
I'm not sure I exactly understand what the question is. Things like this are always simpler when seeing an example.

Do you have an example of a situation where you want to scale something but it's not working the way you expect?

jklingsporn

unread,
May 12, 2017, 3:20:20 AM5/12/17
to vert.x
My question is related to this topic. Let's say, while a user is logging in, you need to signal other verticles and you publish a "login event" on the eventbus. This triggers the handlers of a logging verticle and some other verticles that need to perform tasks when a user logs in. Because all handlers need to listen on this particular event, you have to publish (and not send) this event on the EventBus. Now the question: how would I scale the handlers registered by the logging verticle and the other verticles? If I set the instance-count to N greater than one, then the "login event" would be handled N times by each handler, e.g. the "login event" would be logged N times. What I want though is that the event is handled by each (verticle) handler once, but in a round robin fashion. Is that possible?

Tim Fox

unread,
May 12, 2017, 3:39:30 AM5/12/17
to vert.x

On Friday, 12 May 2017 08:20:20 UTC+1, jklingsporn wrote:
My question is related to this topic. Let's say, while a user is logging in, you need to signal other verticles and you publish a "login event" on the eventbus. This triggers the handlers of a logging verticle and some other verticles that need to perform tasks when a user logs in. Because all handlers need to listen on this particular event, you have to publish (and not send) this event on the EventBus. Now the question: how would I scale the handlers registered by the logging verticle and the other verticles? If I set the instance-count to N greater than one, then the "login event" would be handled N times by each handler, e.g. the "login event" would be logged N times. What I want though is that the event is handled by each (verticle) handler once, but in a round robin fashion. Is that possible?

If you have multiple handlers registered to the same address, and you send (not publish) a message to that address, then one of those handlers will be selected in a round robin fashion.

Publish means "everyone gets to see it". Send means "only one person gets to see it".

An analogy would be publishing a newspaper - everyone gets to see it - versus sending a letter to a single person.
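In code terms (the address and messages here are made up), the difference looks like this:

```java
import io.vertx.core.Vertx;
import io.vertx.core.eventbus.EventBus;

public class PublishVsSend {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    EventBus eb = vertx.eventBus();

    // Three consumers on the same address, e.g. one per verticle instance.
    for (int i = 0; i < 3; i++) {
      final int id = i;
      eb.consumer("user.login", msg ->
          System.out.println("consumer " + id + " got " + msg.body()));
    }

    // The "newspaper": every registered consumer receives the message.
    eb.publish("user.login", "alice");

    // The "letter": exactly one consumer receives it, chosen round robin.
    eb.send("user.login", "bob");
  }
}
```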

jklingsporn

unread,
May 12, 2017, 4:28:44 AM5/12/17
to vert.x
Yes, I'm aware of the difference. I'm wondering if my use case is so uncommon. Maybe I have to explain it better:
Verticles A, B and C each have a handler registered for event X. When X is sent over the bus, all handlers should be triggered exactly once. How can I send X to all handlers (exactly once per verticle) while still being able to scale? To me it seems to be a mix of publish and send behavior, which is right now not possible.

Tim Fox

unread,
May 12, 2017, 4:32:46 AM5/12/17
to vert.x


On Friday, 12 May 2017 09:28:44 UTC+1, jklingsporn wrote:
Yes, I'm aware of the difference. I'm wondering if my use case is so uncommon. Maybe I have to explain it better:
Verticles A, B and C each have a handler registered for event X. When X is sent over the bus, all handlers should be triggered exactly once.

If verticle A, B and C each have a single handler registered for event X, and you want each handler to be triggered only once then a publish will do exactly that.

jklingsporn

unread,
May 12, 2017, 4:38:17 AM5/12/17
to vert.x
That's correct, but what if X is emitted faster than A, B and C can process the event? My first attempt would be to increase the instances setting to N in the DeploymentOptions for A, B and C, but that would just mean that X is handled N times by A, B and C each time it is published.

Tim Fox

unread,
May 12, 2017, 4:58:24 AM5/12/17
to vert.x


On Friday, 12 May 2017 09:38:17 UTC+1, jklingsporn wrote:
That's correct, but what if X is emitted faster than A, B and C can process the event? My first attempt would be to increase the instances setting to N in the DeploymentOptions for A, B and C, but that would just mean that X is handled N times by A, B and C each time it is published.

Yes, because now you have N times as many handlers. Vert.x doesn't know "where" a handler is; at send/publish time, Vert.x delivers the message either to all handlers or to one, depending on whether you did publish or send.

In your case it seems you want to a) publish a "loginOccurred" event which can be received by different modules, and then b) in each module (e.g. your logging module), receive the loginOccurred event and *send* another event that can be picked up by your individual logging workers.
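A rough sketch of this publish-then-send combination, assuming Vert.x 3 (the verticle and address names are invented for illustration):

```java
import io.vertx.core.AbstractVerticle;

// Deployed exactly once: receives every published login event for its module.
public class LoggingModule extends AbstractVerticle {
  @Override
  public void start() {
    vertx.eventBus().consumer("loginOccurred", msg ->
        // Re-send as a point-to-point message: round-robined among workers.
        vertx.eventBus().send("logging.work", msg.body()));
  }
}

// Deployed with setInstances(N): each work item is handled by exactly one worker.
class LoggingWorker extends AbstractVerticle {
  @Override
  public void start() {
    vertx.eventBus().consumer("logging.work", msg -> {
      // CPU-bound processing of the login event goes here.
    });
  }
}
```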

However, I would ask whether it's really necessary to delegate logging to a set of verticle instances. Logging is almost always IO bound, not CPU bound (unless you're doing a lot of complex processing on the log entries), so scaling out verticles is unlikely to help with performance. I see it quite a lot where people design their app up front as a large number of different verticles, all individually scalable, when the truth is they'd get a much simpler app with fewer moving parts and better performance by using a smaller number of verticles and just calling their standard logging library directly.

BTW, I don't think this question is related to the original one on this thread which was about scaling routers, afaict ;)

jklingsporn

unread,
May 12, 2017, 5:59:06 AM5/12/17
to vert.x
Thanks for your answer, and sorry for being somewhat off-topic ^^
Maybe logging was a bad example. But let's assume that the handlers really are CPU bound. Would the ideal solution be the one you described, i.e. fire another event Y via eventBus.send to a handler of verticle D, which has multiple instances?

Tim Fox

unread,
May 12, 2017, 6:08:11 AM5/12/17
to vert.x
Yes, that would be one way of doing it. Note this is just a classic messaging pattern (what I would call the "worker" pattern); nothing really Vert.x specific here. There are many more patterns that can be layered on top of standard pub/sub and point-to-point messaging (which is what most messaging systems, including Vert.x, provide).

There are other ways of implementing the worker pattern to share the load better between different workers (e.g. if one worker is slow, you might not want to do a straight round robin): for example, have another component which you send the work to, which stores it in a queue; workers then request a piece of work, process it, and ask for the next piece. Or you might implement your own distribution algorithm.
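One possible sketch of this pull-based variant, assuming Vert.x 3 (the dispatcher class, addresses, and the "empty" marker are all invented for illustration):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

import java.util.ArrayDeque;
import java.util.Deque;

// Deployed once: buffers submitted work and hands out one item per request.
// Workers pull at their own pace, so a slow worker simply asks less often.
public class WorkDispatcher extends AbstractVerticle {
  private final Deque<JsonObject> queue = new ArrayDeque<>();

  @Override
  public void start() {
    // Producers push work items here.
    vertx.eventBus().<JsonObject>consumer("work.submit",
        msg -> queue.add(msg.body()));

    // A worker asks for its next item when it has finished the previous one.
    vertx.eventBus().consumer("work.next", msg -> {
      JsonObject job = queue.poll();
      // Reply with the job, or an "empty" marker when there is nothing queued.
      msg.reply(job != null ? job : new JsonObject().put("empty", true));
    });
  }
}
```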

ad...@cs.miami.edu

unread,
May 12, 2017, 12:04:12 PM5/12/17
to vert.x
Hi Group,

this is how I understand scaling an HTTP server in Vert.x; just thought I would add my 2 cents:

Write one "primary" verticle which contains an HTTP server + router. You only need to write one of these, but you deploy it with N instances. Vert.x will automatically handle the scaling and the load balancing (assuming you are happy with the default load balancing).

Some care is needed when using executeBlocking within the verticle (think about how many worker threads you want, and whether you want "ordered" or not). Some care is also necessary in choosing the number of event loop threads, though the default will work well in most cases. Multiple verticles can share the same event loop thread.

I don't use worker verticles (I prefer inline executeBlocking), but I imagine similar considerations apply for executeBlocking + worker threads. I also don't use the event bus, so no thoughts there...

Jochen Mader

unread,
May 13, 2017, 5:05:22 AM5/13/17
to ve...@googlegroups.com
Sounds about right.
Just one thing to point out:
executeBlocking relies on the base executor.
I strongly suggest using vertx.createSharedWorkerExecutor to create task-specific executors and run your blocking operations there.
This will protect the rest of your application when one operation goes bonkers with timeouts.
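For example, assuming Vert.x 3 (the pool name and size are arbitrary):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.WorkerExecutor;

public class BlockingVerticle extends AbstractVerticle {
  @Override
  public void start() {
    // A named, task-specific pool: a slow or stuck operation here cannot
    // exhaust the default worker pool used by the rest of the application.
    WorkerExecutor executor = vertx.createSharedWorkerExecutor("db-pool", 10);

    executor.executeBlocking(future -> {
      // Blocking code goes here, off the event loop.
      String result = "value from some blocking call";
      future.complete(result);
    }, res -> {
      // Back on the event loop with the result.
      if (res.succeeded()) {
        System.out.println("got: " + res.result());
      }
    });
  }
}
```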


ad...@cs.miami.edu

unread,
May 13, 2017, 6:12:45 PM5/13/17
to vert.x
>>executeBlocking relies on the base executor.

Is the base executor only used for "user" code in Vert.x, or do Vert.x itself or other libraries use that executor for other reasons?

-Adam