Architecture of a server with multiple REST endpoints in one fat jar


Dominic Rübelzahn

Sep 22, 2016, 4:49:30 AM
to vert.x
Hello,

we are adjusting our architecture to avoid sharing a router between different REST verticles, as recommended here: https://github.com/vert-x3/vertx-web/issues/378. It was called an anti-pattern, and we can't use the scalability of Vert.x with our current implementation.

Sadly we have some limitations imposed by operations, so we don't see a good way to implement it as recommended. Our limitations are (and can't be changed):
* one system to deploy and run everything
* one artefact that can be deployed and started
--> a microservice architecture as described in http://vertx.io/blog/vert-x-blueprint-tutorials/ is sadly not possible

Right now we have implemented:
* 10+ REST services, increasing
* 20+ other verticles, accessible via the event bus
* 1 main verticle which starts the http server and deploys all other verticles
* 1 router which is shared between all REST verticles

To avoid the anti-pattern and make use of scalability, we are figuring out how to solve this. We changed the implementation so that every REST verticle has its own server. But when the whole application is started, only one REST endpoint is available. It seems you can't have different routes with a shared server, right? The question then is how to do it. What are your recommendations?

So far I see the following possibilities:
* leave everything as it is, even though we have implemented an anti-pattern and can't get the scalability benefit for the REST endpoints --> not the one we prefer
* set up multiple HTTP servers, each with its own router containing all REST endpoints. That makes a 1-1 mapping between server and router.
* implement the REST endpoints as handlers --> feels wrong, so no

What do you recommend?

Greetings,
Dominic


Jochen Mader

Sep 22, 2016, 5:49:21 AM
to ve...@googlegroups.com
You should be able to create a WebServer in each verticle running on the same port. Vert.x will take care of it and only launch one instance for the given port and distribute requests according to your handlers.
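A minimal sketch of what Jochen describes, assuming Vert.x 3 (the 2016-era API, hence `router::accept`) with vertx-web on the classpath; the class name and route are illustrative:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

// Hypothetical REST verticle; each deployed instance opens a server on
// the same port. Vert.x notices the port is already bound and
// round-robins incoming connections across all servers listening on it.
public class OrderApiVerticle extends AbstractVerticle {
  @Override
  public void start() {
    Router router = Router.router(vertx);
    router.get("/orders").handler(ctx -> ctx.response().end("[]"));
    vertx.createHttpServer()
         .requestHandler(router::accept)  // Router implements the handler directly in later 3.x
         .listen(8080);
  }
}
```

Note that it is connections, not individual requests, that get round-robined between the sharing servers, which is relevant to the behaviour Dominic reports below.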




--
Jochen Mader | Lead IT Consultant


Deven Phillips

Sep 22, 2016, 10:18:44 AM
to vert.x
Instead of starting multiple HttpServer instances, you could use nested Router instances. The concept is explained at: http://vertx.io/docs/vertx-web/java/#_sub_routers
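The linked docs boil down to mounting one router under a path prefix of another; a sketch assuming Vert.x 3 with vertx-web, with made-up paths:

```java
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

// Sub-router sketch: everything under /orders/* is delegated to the
// nested router, so GET /orders/list hits the handler below.
public class SubRouterExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router main = Router.router(vertx);
    Router orders = Router.router(vertx);
    orders.get("/list").handler(ctx -> ctx.response().end("[]"));
    main.mountSubRouter("/orders", orders);
    vertx.createHttpServer().requestHandler(main::accept).listen(8080);
  }
}
```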

I hope that helps!

Deven

Dominic Rübelzahn

Sep 22, 2016, 10:28:16 AM
to vert.x
Hello,

thank you for your answers so far, but both have some limitations:


You should be able to create a WebServer in each verticle running on the same port. Vert.x will take care of it and only launch one instance for the given port and distribute requests according to your handlers.
--> This is what we tried. It works perfectly if you only deploy one REST verticle, or just multiple instances of one REST verticle. If you deploy different REST verticles, then only one REST endpoint is available.

You could, instead of starting multiple HTTPServer instances, use nested Router instances The concept is explained at: http://vertx.io/docs/vertx-web/java/#_sub_routers
--> This is something we already do, but the problem is that there is only one server, which limits the scalability. Also, I don't see a big difference between creating a subrouter for each REST verticle and just having one that is shared. Please correct me if I am wrong!

Greetings,
Dominic

Tim Fox

Sep 24, 2016, 3:07:52 AM
to vert.x
I am really struggling to understand what the problem is on this thread.

Krzysztof Kowalczyk

Sep 24, 2016, 8:21:58 AM
to vert.x
Hi Dominic,

TL;DR: endpoint = handler; verticle = deployment unit, not a tool to define modules; never share something across verticles directly (e.g. the instance of a router) unless you really know why you are doing it (a DB client might be good to share if it is thread-safe).

I am not sure I understand your problem, but I see a familiar pattern here, so I will assume that your problem is a form of the "too many verticles" design flaw. Don't worry, it seems most new Vert.x apps start there (at least mine did).

My assumptions:
- you have one machine, or all machines run the same verticles
- you have about 30 verticles running in a single JVM
- your services are more endpoints than services, or "nano" services - conceptually they are not separate paths

I might be wrong and my answer might be irrelevant. 

So to start: do you have a machine with over 30 CPUs? I guess not. So why create 30 verticles?

Every single verticle has its own thread with event loop. If you have many verticles, you have more threads than the CPU can handle without switching back and forth between threads. If we can stop switching context == keep the number of threads low - we get a significant gain in performance. If we can't, we just waste a lot of time on https://en.wikipedia.org/wiki/Context_switch . To be able to process many requests we need some blocking threads for IO, but the actual processing should be async and run on event loops. Such an architecture allows much higher throughput than the classic servlet == thread model.

So we want to keep the number of threads low and be able to scale at the same time.

How? It is actually pretty simple. I would say that what you need by default is One Verticle, unless there is a Very Good Reason to have more than one. Most of the time the reason is not very good. Good is not enough; it must be very good, IMHO. You might want to use separate verticles to deploy completely separate services that don't share ports or code - to run different services in a single JVM. But that is rather for a "near" service - an app deployed on someone's PC, phone, or Raspberry Pi at home - not a service which runs on a server and is exposed to internet-scale traffic, as this would be slow (where slow is something that can't respond in < 10 ms; it would still hit the Doherty threshold).

If you have one verticle you can easily scale it up just by increasing the number of its instances. Because the system is homogeneous, this is trivial. There is no problem with ports or a shared router. The router is not shared, but the HTTP servers you create can use the same port. In normal cases it will also be faster, as you will have fewer threads and no context switching.

Treat a verticle like a WAR - it is your deployment unit, not a way to modularize your app. You modularize your app with handlers, classic OOP, Guice, OSGi...


Depending on how you develop the services/endpoints (one team, many teams), how big they are, etc., you might want to do different things.
To show two different options: one is simple handlers, the other a kind of DI.

Handler based:
- all routing is set up in one main verticle, and for every route a handler is provided
- a handler does not know its path; the verticle does
- one can use sub-routers to better manage complexity
- this way is simple and can be taken quite far; with good OOP skills it is also really easy to read, but you need to know all services up front
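The bullet points above might look roughly like this (a sketch assuming Vert.x 3 with vertx-web; `OrdersHandler` and the paths are made-up names):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;

// Plain class, not a verticle: it supplies behaviour but does not know
// which path it is mounted on.
class OrdersHandler {
  void list(RoutingContext ctx) { ctx.response().end("[]"); }
}

// The one main verticle owns the whole routing table.
public class MainVerticle extends AbstractVerticle {
  @Override
  public void start() {
    Router router = Router.router(vertx);
    OrdersHandler orders = new OrdersHandler();
    router.get("/orders").handler(orders::list);   // the path lives here
    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  }
}
```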

And on the other side of the spectrum:

DI based:
- you have a generic verticle that deploys a server and runs through a list of route configurations in order
- a configuration takes a router and registers all needed handlers (it knows its path)
- the routes are injected into the verticle through Guice, CDI, Spring, or a custom mechanism
- the DI tool finds all possible services that you want to run and injects them into the verticle
- this way you can develop every single service as a separate project, put it in a jar, and deploy it in one verticle
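A DI-flavoured sketch of this second option, assuming Vert.x 3; `RouteConfig` and `ApiVerticle` are hypothetical names, and the list could be assembled by Guice, Spring, or by hand:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;
import java.util.List;

// Each service ships one of these and knows its own paths.
interface RouteConfig {
  void register(Router router);
}

// Generic verticle: it is handed the full list of configurations and
// applies them in order against a single router.
public class ApiVerticle extends AbstractVerticle {
  private final List<RouteConfig> configs;

  public ApiVerticle(List<RouteConfig> configs) { this.configs = configs; }

  @Override
  public void start() {
    Router router = Router.router(vertx);
    configs.forEach(c -> c.register(router));
    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  }
}
```

Deployment is then something like `vertx.deployVerticle(new ApiVerticle(configs))`, with each service living in its own jar on the classpath.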

Be aware of the threads you have: not only verticles create threads - some DB clients will create their own thread pools, and you don't want to have too many threads.

Hope that helps,
Krzysztof

Jez P

Sep 24, 2016, 9:01:33 AM
to vert.x
"Every single verticle has its own thread with event loop." -> this is not true. Every single verticle instance is tied to a single event loop thread. However, multiple verticle instances may be tied to the same event loop thread. In other words, 30 verticle instances is perfectly fine. By default, IIRC, vert.x creates 2*no of cores event loop threads, so if you have 4 cores you would expect 8 event loop threads. If you had 30 verticle instances, they would be served by those 8 event loop threads, not by 30 threads.

However, a key part of your message is correct: by keeping non-blocking processing on the event loop, you are in general minimising the number of threads and therefore the amount of switching. Where you have blocking I/O, that needs to be taken off the event loop threads to keep processing happening cleanly and to avoid effective starvation of the event loop threads (a bad thing).

Krzysztof Kowalczyk

Sep 24, 2016, 10:18:14 AM
to vert.x
Thanks Jez, 

I did not know there is an upper limit. Lifelong learning ;)

The main points still stand (or I will learn even more today):
- if one needs one port, then there should be one verticle, with as many instances as needed. It could offload work to other verticles, but I don't see the point of doing that, and it would make startup and communication complex. Rx, JMS, and DB clients have their own threads and need to be kept in check to achieve optimal performance.
- check your threads and force custom limits. I've seen a Vert.x project with 200 threads because of various developer errors. But even if everything is OK, we might still have more threads than needed: a 4-core machine gives 8 event loop threads, but JMS would take some, the DB would take some (4, for instance), and if one reaches the Vert.x limit as well, one will have more threads than optimal. Make data-driven decisions - test with Gatling or JMeter.

Regards,
Krzysztof

Jez P

Sep 24, 2016, 10:30:23 AM
to vert.x
It's not an automatic upper limit - I think it's configurable - but it does avoid proliferation.

I agree with you - one verticle with many instances. The router is not the verticle. Dominic, what's the problem with one instance of the router per verticle, with all instances being identical? Right now you only have one http server verticle - why is there a problem with multiple instances of that verticle?
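"One verticle with many instances" is a one-liner at deployment time; a sketch assuming Vert.x 3, with the verticle class name being hypothetical. Each instance runs `start()` itself and so builds its own Router, so nothing is shared across event loops:

```java
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Deployer {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // Deploy by class name so Vert.x can instantiate one object per
    // instance; 2 * cores matches the default event-loop count.
    vertx.deployVerticle("com.example.ApiVerticle",
        new DeploymentOptions().setInstances(
            2 * Runtime.getRuntime().availableProcessors()));
  }
}
```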

Jez P

Sep 24, 2016, 10:32:36 AM
to vert.x
And yes, your main points still stand. It's important to be aware of the non-event-loop threads, as they can add quite a lot of additional threads and some context switching. So it's x event loop threads by default, but that doesn't include worker pools (for executeBlocking calls), Rx thread pools, etc.

Dominic Rübelzahn

Sep 27, 2016, 2:36:12 AM
to vert.x
Hello guys,

first, thank you very much - especially Krzysztof for that good explanation. You hit my problem very well. Your suggestions also sound good so far; I will give the DI one a try.

@Jez: the problem is that the instances are not identical. We have multiple different verticles instead of just one. I know that is not what Vert.x recommends - but as Krzysztof already said, a typical pitfall for Vert.x beginners.

Greetings,
Dominic

Johannes Schüth

Sep 27, 2016, 9:43:49 AM
to vert.x
Hi,

I have exactly the same use case. I have around 20 verticles which share a common Router in order to build up a REST API.

@Dominic
I understand your use case, but what exactly do you mean by:

... we can't use the scalability of Vert.x with our current implementation.

Do you mean that you can't just deploy the verticles multiple times?

I'm also a bit worried about this "anti-pattern" and thus created the following wish-list entry: https://github.com/vert-x3/issues/issues/138

How would you go about dynamic extensibility of a Vert.x REST API? My original plan was to dynamically deploy additional verticles which would extend the REST API by adding new handlers/subrouters. After reading this thread I'm not sure whether this solution is a good choice.

- Johannes

S

Sep 28, 2016, 5:25:57 PM
to vert.x
How about one router verticle sending event bus messages to handler verticles? You could handle a parametrized path in the router, then send messages to the endpoint listening on _that_ parameter's address - if any :)
Pretty dynamic to me, and no need for any runtime de/registration of endpoints...
S
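A sketch of that idea, assuming Vert.x 3: one frontend verticle forwards matched requests over the event bus to whichever handler verticle listens on the derived address. The `api.<name>` address scheme is an assumption, and `eventBus().request(...)` is the Vert.x >= 3.8 spelling (earlier 3.x uses `send(...)` with a reply handler):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

public class FrontendVerticle extends AbstractVerticle {
  @Override
  public void start() {
    Router router = Router.router(vertx);
    router.get("/api/:service").handler(ctx -> {
      // Derive the event bus address from the path parameter.
      String addr = "api." + ctx.pathParam("service");
      vertx.eventBus().<String>request(addr, "", reply -> {
        if (reply.succeeded()) {
          ctx.response().end(reply.result().body());
        } else {
          ctx.response().setStatusCode(404).end();  // nobody listening there
        }
      });
    });
    vertx.createHttpServer().requestHandler(router).listen(8080);
  }
}
```

A handler verticle then becomes reachable simply by registering a consumer, e.g. `vertx.eventBus().consumer("api.orders", msg -> msg.reply("[]"))`, with no route registration at runtime.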

Tim Fox

Sep 29, 2016, 12:27:50 PM
to vert.x
I don't understand why you need 20 verticles.

Why not just have one verticle and 20 classes?

Johannes Schüth

Sep 29, 2016, 1:00:04 PM
to vert.x
Hi Tim,

you are completely right. I _don't_ need that many verticles. I ended up with that many verticles because I assumed:

* I would not be doing anything wrong or breaking any design pattern by having that many verticles.
* Deploying additional verticles would be a great way to add additional endpoints to my REST API (e.g. using them as the base for a plugin system).

I started development on Vert.x 2. I wanted to use the module system to add a plugin system to my app. With Vert.x 3 I initially thought that I could just use the Maven Verticle Factory instead ( http://vertx.io/docs/vertx-maven-service-factory/java/ ), since Vert.x 3 no longer supports the module system.

I think I will rewrite my code to use just a single verticle (as suggested) and use Dagger to inject all endpoint handlers into that verticle.

Dominic Rübelzahn

Sep 30, 2016, 3:18:40 AM
to vert.x
Hello,

@Tim: what do you mean by having just one verticle and 20 classes? Right now every REST endpoint is its own verticle. How should we implement it, in your view? As handlers? Or just one REST verticle that delegates the requests to other classes? Is there an example of a more complex application somewhere?

Greetings,
Dominic

Tim Fox

Sep 30, 2016, 3:48:41 AM
to vert.x
AIUI, you want to create a service which provides many endpoints, and you don't want to specify the routing for all the endpoints in one class, as you probably have different teams handling different endpoints. Is that correct?

Perhaps you could describe your requirements in more detail to make sure I am on the same page here.