Our experience with Vert.x & some thoughts and questions


Tolga Tekin

Oct 23, 2014, 10:19:02 PM
to ve...@googlegroups.com
Hey guys, 

Over the past year or so we have released a couple of servers to production on Vert.x 2.1, and we are extremely happy with it. The TPS and response times we are getting are pretty impressive, and we are not seeing any stability issues.

I wanted to share our experience. 
My team was very experienced with NodeJS, but as a company requirement we had to use pure Java. After doing some research we decided to go with Vert.x because of its similarity to NodeJS as a stack and its use of Java as the main language. It all worked out nicely: we are easily beating the speeds we saw on our previous NodeJS stacks.
 
Most of our services are heavy on network I/O (they make many external calls to a DB, other external REST APIs, and Redis), do little CPU processing beyond organizing data and JSON handling, and have modest memory requirements. Speed was one of the biggest requirements because of our large customer base.


Here are some of the approaches we took (I'm noticing some of these are drastically different from the approaches people take on the Vert.x forums):
* We use Yoke as our main router and middleware handler. It is very similar to ExpressJS, and we got used to it pretty quickly. Can't recommend it enough; awesome library.
* In our services we always use a single verticle and rarely use the EventBus (except in some tests). Coming from a NodeJS mentality, we didn't need an actor-based solution. (We do use the "--instances" option to scale the server to the available core count.)
* We ported NodeJS's Async library to Java. This helped us a lot; we were so used to it from our NodeJS days that we couldn't live without it, and we use it pretty much everywhere.
* We use Vert.x's HTTP client for all internal calls.
* We wrote our own Redis client in Java on top of Vert.x's TCP client (instead of mod-redis).
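The Async-style control flow from the third bullet can be sketched in plain Java. This is not their actual port (which isn't public), just a minimal, hypothetical `waterfall` helper in the spirit of NodeJS's async library, where each task passes its result to the next through a callback:

```java
import java.util.List;
import java.util.function.BiConsumer;

// Minimal, hypothetical sketch of an async-style "waterfall":
// each task receives the previous task's result plus an (error, result)
// callback, and the chain short-circuits on the first error.
class Waterfall {
    // A task: (previous result, completion callback) -> void
    interface Task extends BiConsumer<Object, BiConsumer<Throwable, Object>> {}

    static void waterfall(List<Task> tasks, BiConsumer<Throwable, Object> done) {
        run(tasks, 0, null, done);
    }

    private static void run(List<Task> tasks, int i, Object prev,
                            BiConsumer<Throwable, Object> done) {
        if (i == tasks.size()) {
            done.accept(null, prev);          // all tasks succeeded
            return;
        }
        tasks.get(i).accept(prev, (err, result) -> {
            if (err != null) {
                done.accept(err, null);       // stop at the first error
            } else {
                run(tasks, i + 1, result, done);
            }
        });
    }
}
```

In a verticle, each task body would typically wrap a non-blocking Vert.x call (HTTP, DB, Redis) and invoke the callback from its handler.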

The last decision was a bit drastic. At the beginning we did use mod-redis, but we noticed that having a separate verticle and communicating with it through the EventBus was slowing things down. Normally the EventBus overhead is small, but Redis is really fast, and at those speeds the overhead is quite noticeable.
We also needed to add Redis Sentinel support, which requires a lot of changes at the low level of the Redis client, so we decided to write our own version.
We are getting about 80% better performance compared to mod-redis. (As far as we can see, the main slowness of mod-redis is the JSON serialization/deserialization during EventBus communication, so nothing is wrong with Mr. Lopes's awesome library internally.)
Since most of our overhead is on Redis, this approach gave us tremendous speed gains.
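For readers curious what writing a Redis client on the raw TCP client involves at the wire level: Redis speaks RESP, a simple text-framed protocol. The sketch below (assumed names, not their code) encodes a command as a RESP multi-bulk frame; a real client would write these bytes to a Vert.x NetSocket and parse the replies in its data handler:

```java
import java.nio.charset.StandardCharsets;

// Sketch of a RESP (REdis Serialization Protocol) request encoder.
// Every command is an array of bulk strings:
//   *<argc>\r\n   then, per argument:   $<byte-length>\r\n<arg>\r\n
class RespEncoder {
    static byte[] encode(String... args) {
        StringBuilder sb = new StringBuilder();
        sb.append('*').append(args.length).append("\r\n");
        for (String arg : args) {
            int len = arg.getBytes(StandardCharsets.UTF_8).length;
            sb.append('$').append(len).append("\r\n").append(arg).append("\r\n");
        }
        return sb.toString().getBytes(StandardCharsets.UTF_8);
    }
}
```

Skipping the EventBus hop means these command bytes go straight from the caller's event loop to the socket, with no intermediate JSON serialization, which is where the speedup described above comes from.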

I know that with our approach we lose the polyglot nature of Vert.x, but since we are Java-only it doesn't affect us. Still, I feel we are diverging a bit from many Vert.x developers' approach.


So, my thoughts and questions:

* For pure Java applications, I think libraries (e.g. mod-redis) that use verticles actually hurt speed due to verticle and EventBus overhead. I wish library builders had a way to offer a pure Java API (or an API in another language) as well as an API over the EventBus (for polyglot support). The language's reflection features could be used to discover the API and expose it through the EventBus automatically.

* The Vert.x module registry is awesome, but it would be even better if there were a way to share language-specific modules with the community. For example, we would love to share our Java port of Async: it is Java-only and just a utility library, not a module, and I'm not sure what the best way to share it currently is.
I guess what I'm looking for is something more like npm, where people share utility libraries and everything else. IMO this gap hurts the Vert.x community a bit, though the polyglot nature of Vert.x makes this part hard.
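To make the first point above concrete, the reflection idea might look roughly like this: a generic dispatcher that maps an action name plus arguments (as they would arrive in an EventBus message) onto a plain Java object's methods. This is a hypothetical sketch, not an existing Vert.x facility; Vert.x 3 later addressed the same need with generated service proxies:

```java
import java.lang.reflect.Method;

// Hypothetical sketch: expose a plain Java object through a message-style
// interface by dispatching an {action, args} pair to its methods via
// reflection. In Vert.x this dispatch would sit inside an EventBus
// message handler; here it just takes the already-decoded pieces.
class ReflectiveDispatcher {
    private final Object service;

    ReflectiveDispatcher(Object service) {
        this.service = service;
    }

    Object dispatch(String action, Object... args) throws Exception {
        for (Method m : service.getClass().getMethods()) {
            if (m.getName().equals(action) && m.getParameterCount() == args.length) {
                return m.invoke(service, args);   // naive match: name + arity
            }
        }
        throw new NoSuchMethodException(action);
    }
}
```

A Java caller would invoke the service object directly and pay no dispatch cost; only polyglot callers would go through the reflective message path.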


Anyways, thanks a lot,
- Tolga



Jordan Halterman

Oct 23, 2014, 11:07:20 PM
to ve...@googlegroups.com
Thanks for this! Sounds like some good projects to open source in there ;-)

Couple comments in line...

On Oct 23, 2014, at 7:19 PM, 'Tolga Tekin' via vert.x <ve...@googlegroups.com> wrote:

* We ported NodeJS's Async library to Java. This helped us a lot; we were so used to it from our NodeJS days that we couldn't live without it, and we use it pretty much everywhere.
I was thinking of doing exactly this! It is definitely something that is missing from Vert.x.

* We use Vert.x's HTTP client for all internal calls.
* We wrote our own Redis client in Java on top of Vert.x's TCP client (instead of mod-redis).

Awesome.


I know, with our approach we are losing the polygot nature of the VertX. But since we are Java only, it doesn't affect us. But I feel like we are a bit diverging from many Vert.x developers' approach. 


So my thoughts, and questions:

* For pure Java applications, I think libraries (e.g. mod-redis) that use verticles actually hurt speed due to verticle and EventBus overhead. I wish library builders had a way to offer a pure Java API (or an API in another language) as well as an API over the EventBus (for polyglot support). The language's reflection features could be used to discover the API and expose it through the EventBus automatically.
Have you seen proxies in Vert.x 3? Check out the vert-x3 GitHub organization, for example the Mongo project, which exposes a proxy over the event bus with the implementation in Java.


* The Vert.x module registry is awesome, but it would be even better if there were a way to share language-specific modules with the community. For example, we would love to share our Java port of Async: it is Java-only and just a utility library, not a module, and I'm not sure what the best way to share it currently is.
I guess what I'm looking for is something more like npm, where people share utility libraries and everything else. IMO this gap hurts the Vert.x community a bit, though the polyglot nature of Vert.x makes this part hard.
Utility libraries are fine within the context of the module system: they can just be organized as non-runnable modules which can be included in other modules. That said, as you may know, the module system is going away, being replaced by build-time dependency resolution (e.g. Maven). So presumably it will be perfectly fine to simply publish such a utility to Maven Central (which, of course, works for Vert.x 2 as well).


Anyways, thanks a lot,
- Tolga




Tolga Tekin

Oct 24, 2014, 12:22:01 AM
to ve...@googlegroups.com
Jordan,
Great pointer to the MongoService project, it looks awesome; it appears to have both a Java API and an EventBus one.
I think with some quick work we can release our Redis library the same way. 

Thinking about the Async library: let me catch up with the Vert.x 3.0 work. Yes, Maven is a good solution; I guess we could just release it there.
The only problem with this approach is that it is very hard to search for Vert.x-specific libraries. I wish there were a better way to find Vert.x-only libraries, including utilities. Something similar to the npm website.

- Tolga

Justin Litchfield

Oct 24, 2014, 8:29:19 AM
to ve...@googlegroups.com
To be clear, looking through the docs (https://github.com/vert-x3/service-proxy) and code, it seems these proxy objects will still have all the same overhead as sending EventBus messages to the existing libraries, like mod-redis. They will just be easier to develop against. Is that correct?

Tim Fox

Oct 24, 2014, 8:51:11 AM
to ve...@googlegroups.com
Great!

Thanks for the post :)

Tim Fox

Oct 24, 2014, 8:52:51 AM
to ve...@googlegroups.com
On 24/10/14 05:20, 'Tolga Tekin' via vert.x wrote:
Jordan,
Great pointer to the MongoService project, it looks awesome; it appears to have both a Java API and an EventBus one.

Indeed, this is a feature of Vert.x 3.0 services that use service proxies: they can be used either locally (as you do) or remotely over the event bus, using the exact same API.

I have just finished the implementation (see my recent post). The repo is here: https://github.com/vert-x3/service-proxy


I think with some quick work we can release our Redis library the same way. 

Thinking about the Async library: let me catch up with the Vert.x 3.0 work. Yes, Maven is a good solution; I guess we could just release it there.
The only problem with this approach is that it is very hard to search for Vert.x-specific libraries. I wish there were a better way to find Vert.x-only libraries, including utilities.

+1

Something similar to the npm website.

- Tolga



Tim Fox

Oct 24, 2014, 9:53:53 AM
to ve...@googlegroups.com
On 24/10/14 13:29, Justin Litchfield wrote:
To be clear, looking through the docs (https://github.com/vert-x3/service-proxy) and code, it seems these proxy objects will still have all the same overhead as sending EventBus messages to the existing libraries, like mod-redis.

Sure, if you're communicating with a remote server over the event bus, then you're going to incur the cost of communicating with a remote server over the event bus. But a key point here is that the service can also be used locally, i.e. you instantiate an instance of the service in your current verticle. In that case there is no communication over the event bus, so it will be quicker. I think this is what Tolga wants, and it wasn't available with the Vert.x 2.0 way of "bus modules".
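Tim's distinction can be illustrated with a plain-Java sketch (hypothetical names; in Vert.x 3 the proxy class is generated for you): the caller codes against one interface, and only the wiring decides whether calls stay in-process or would cross the event bus:

```java
import java.util.function.Consumer;

// One service interface, two interchangeable implementations. Callers
// code against GreetingService and never know whether a call stays
// in-process or crosses an event-bus-like boundary.
interface GreetingService {
    void greet(String name, Consumer<String> resultHandler);
}

// Direct, in-process implementation: no serialization, no bus hop.
class LocalGreetingService implements GreetingService {
    public void greet(String name, Consumer<String> resultHandler) {
        resultHandler.accept("Hello, " + name);
    }
}

// Stand-in for an event-bus proxy. Here it simply delegates; in Vert.x
// it would serialize the call, send it over the bus, and invoke the
// handler when the reply comes back.
class ProxyGreetingService implements GreetingService {
    private final GreetingService delegate;

    ProxyGreetingService(GreetingService delegate) {
        this.delegate = delegate;
    }

    public void greet(String name, Consumer<String> resultHandler) {
        delegate.greet(name, resultHandler);
    }
}
```

Swapping one implementation for the other changes the cost profile, not the calling code, which is what makes the local path fast while keeping the polyglot path available.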

Tolga Tekin

Oct 24, 2014, 2:58:23 PM
to ve...@googlegroups.com
Yes, exactly.
This is a very nice feature of Vert.x 3.0, Tim. Kudos.

- Tolga

Jordan Halterman

Oct 24, 2014, 6:42:16 PM
to ve...@googlegroups.com
Definitely my favorite feature of Vert.x 3 so far. I think it resolves one of the bigger remaining usability challenges from Vert.x 2 for exactly the reasons the OP mentioned :-)