Example of using QBit and Vertx together (more complete)


Rick Hightower

Sep 30, 2015, 2:42:30 AM
to vert.x

Example of combining QBit and Vertx


Let's say we have a service like this:


Sample QBit Service

    @RequestMapping("/hello")
    public static class MyRestService {

        @RequestMapping(value = "/world", method = RequestMethod.POST)
        public String hello(String body) {
            return body;
        }
    }

We want to use a lot of Vertx features, and we decide to embed QBit support inside of a verticle.


Our vertx MyVerticle might look like this:

Vertx Verticle

    public class MyVerticle extends AbstractVerticle {

        private final int port;

        public MyVerticle(int port) {
            this.port = port;
        }

        public void start() {

            try {


                /* Route one call to a vertx handler. */
                final Router router = Router.router(vertx); //Vertx router
                router.route("/svr/rout1/").handler(routingContext -> {
                    HttpServerResponse response = routingContext.response();
                    response.setStatusCode(202);
                    response.end("route1");
                });

                /* Route everything under /hello to QBit http server. */
                final Route qbitRoute = router.route().path("/hello/*");


                /* Vertx HTTP Server. */
                final io.vertx.core.http.HttpServer vertxHttpServer =
                        this.getVertx().createHttpServer();

                /*
                 * Use the VertxHttpServerBuilder which is a special builder for Vertx/Qbit integration.
                 */
                final HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
                        .setRoute(qbitRoute)
                        .setHttpServer(vertxHttpServer)
                        .setVertx(getVertx())
                        .build();


                /*
                 * Create a new service endpointServer and add MyRestService to it.
                 *  ( You could add a lot more services than just one. )
                 */
                final MyRestService myRestService = new MyRestService();
                final ServiceEndpointServer endpointServer = endpointServerBuilder().setUri("/")
                        .addService(myRestService).setHttpServer(httpServer).build();

                endpointServer.startServer();



                /*
                 * Associate the router as a request handler for the vertxHttpServer.
                 */
                vertxHttpServer.requestHandler(router::accept).listen(port);
            }catch (Exception ex) {
                ex.printStackTrace();
            }
        }

        public void stop() {
        }

    }

Read the comments to see what is going on. It should make sense.

Next we start up the vertx Verticle (perhaps in a main method).

Starting up the Vertx verticle

        final CountDownLatch latch = new CountDownLatch(1);
        myVerticle = new MyVerticle(port);
        vertx = Vertx.vertx();
        vertx.deployVerticle(myVerticle, res -> {
            if (res.succeeded()) {
                System.out.println("Deployment id is: " + res.result());
            } else {
                System.out.println("Deployment failed!");
                res.cause().printStackTrace();
            }
            latch.countDown();
        });
        latch.await(5, TimeUnit.SECONDS);

Now make some HTTP calls with QBit's HttpClient :)

        final HttpClient client = HttpClientBuilder.httpClientBuilder()
                     .setHost("localhost").setPort(port).buildAndStart();

        final HttpTextResponse response = client.postJson("/svr/rout1/", "\"hi\"");
        assertEquals(202, response.code());
        assertEquals("route1", response.body());


        final HttpTextResponse response2 = client.postJson("/hello/world", "\"hi\"");
        assertEquals(200, response2.code());
        assertEquals("\"hi\"", response2.body());

The full example is actually one of the integration tests that is part of QBit.

Full example

package io.advantageous.qbit.vertx;

import io.advantageous.qbit.annotation.RequestMapping;
import io.advantageous.qbit.annotation.RequestMethod;
import io.advantageous.qbit.http.client.HttpClient;
import io.advantageous.qbit.http.client.HttpClientBuilder;
import io.advantageous.qbit.http.request.HttpTextResponse;
import io.advantageous.qbit.http.server.HttpServer;
import io.advantageous.qbit.server.ServiceEndpointServer;
import io.advantageous.qbit.util.PortUtils;
import io.advantageous.qbit.vertx.http.VertxHttpServerBuilder;
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.http.HttpServerResponse;
import io.vertx.ext.web.Route;
import io.vertx.ext.web.Router;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import static io.advantageous.qbit.server.EndpointServerBuilder.endpointServerBuilder;
import static org.junit.Assert.assertEquals;

/**
 * Created by rick on 9/29/15.
 */
public class VertxRESTIntegrationTest {

    private Vertx vertx;
    private TestVerticle testVerticle;
    private int port;

    @RequestMapping("/hello")
    public static class TestRestService {

        @RequestMapping(value = "/world", method = RequestMethod.POST)
        public String hello(String body) {
            return body;
        }
    }

    public static class TestVerticle extends AbstractVerticle {

        private final int port;

        public TestVerticle(int port) {
            this.port = port;
        }

        public void start() {

            try {


                /* Route one call to a vertx handler. */
                final Router router = Router.router(vertx); //Vertx router
                router.route("/svr/rout1/").handler(routingContext -> {
                    HttpServerResponse response = routingContext.response();
                    response.setStatusCode(202);
                    response.end("route1");
                });

                /* Route everything under /hello to QBit http server. */
                final Route qbitRoute = router.route().path("/hello/*");


                /* Vertx HTTP Server. */
                final io.vertx.core.http.HttpServer vertxHttpServer =
                        this.getVertx().createHttpServer();

                /*
                 * Use the VertxHttpServerBuilder which is a special builder for Vertx/Qbit integration.
                 */
                final HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
                        .setRoute(qbitRoute)
                        .setHttpServer(vertxHttpServer)
                        .setVertx(getVertx())
                        .build();


                /*
                 * Create a new service endpointServer.
                 */
                final ServiceEndpointServer endpointServer = endpointServerBuilder().setUri("/")
                        .addService(new TestRestService()).setHttpServer(httpServer).build();

                endpointServer.startServer();



                /*
                 * Associate the router as a request handler for the vertxHttpServer.
                 */
                vertxHttpServer.requestHandler(router::accept).listen(port);
            }catch (Exception ex) {
                ex.printStackTrace();
            }
        }

        public void stop() {
        }

    }

    @Before
    public void setup() throws Exception{


        final CountDownLatch latch = new CountDownLatch(1);
        port = PortUtils.findOpenPortStartAt(9000);
        testVerticle = new TestVerticle(port);
        vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(5));
        vertx.deployVerticle(testVerticle, res -> {
            if (res.succeeded()) {
                System.out.println("Deployment id is: " + res.result());
            } else {
                System.out.println("Deployment failed!");
                res.cause().printStackTrace();
            }
            latch.countDown();
        });


        latch.await(5, TimeUnit.SECONDS);
    }

    @Test
    public void test() {

        final HttpClient client = HttpClientBuilder.httpClientBuilder().setHost("localhost").setPort(port).buildAndStart();
        final HttpTextResponse response = client.postJson("/svr/rout1/", "\"hi\"");
        assertEquals(202, response.code());
        assertEquals("route1", response.body());


        final HttpTextResponse response2 = client.postJson("/hello/world", "\"hi\"");
        assertEquals(200, response2.code());
        assertEquals("\"hi\"", response2.body());

    }


    @After
    public void tearDown() throws Exception {

        final CountDownLatch latch = new CountDownLatch(1);
        vertx.close(res -> {
            if (res.succeeded()) {
                System.out.println("Vertx is closed? " + res.result());
            } else {
                System.out.println("Vertx failed closing");
            }
            latch.countDown();
        });


        latch.await(5, TimeUnit.SECONDS);
        vertx = null;
        testVerticle = null;

    }
}

You can bind directly to the vertxHttpServer, or you can use a router.

Bind qbit to a vertx router

    public static class MyVerticle extends AbstractVerticle {

        private final int port;

        public MyVerticle(int port) {
            this.port = port;
        }

        public void start() {

            try {

                HttpServerOptions options = new HttpServerOptions().setMaxWebsocketFrameSize(1000000);
                options.setPort(port);

                Router router = Router.router(vertx); //Vertx router
                router.route("/svr/rout1/").handler(routingContext -> {
                    HttpServerResponse response = routingContext.response();
                    response.setStatusCode(202);
                    response.end("route1");
                });



                io.vertx.core.http.HttpServer vertxHttpServer =
                        this.getVertx().createHttpServer(options);

                HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
                        .setRouter(router)//BIND TO THE ROUTER!
                        .setHttpServer(vertxHttpServer)
                        .setVertx(getVertx())
                        .build();
               ...

Bind qbit to a vertx httpServer

    public static class MyVerticle extends AbstractVerticle {

        private final int port;

        public MyVerticle(int port) {
            this.port = port;
        }

        public void start() {

            try {


                HttpServerOptions options = new HttpServerOptions().setMaxWebsocketFrameSize(1000000);
                options.setPort(port);


                io.vertx.core.http.HttpServer vertxHttpServer =
                        this.getVertx().createHttpServer(options);

                HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
                        .setVertx(getVertx()) 
                        .setHttpServer(vertxHttpServer) //BIND TO VERTX HTTP SERVER DIRECT
                        .build();

           ...

Where do we go from here

QBit has a health system and a microservices stats collection system; Vertx 3 provides similar support. QBit has an event bus. Vertx has an event bus. There is no reason why QBit can't provide Vertx implementations of its event bus (this is how the QBit event bus started), or for that matter integrate with Vertx's health system or its stats collection system. QBit has its own service discovery system with implementations that talk to DNS or Consul, or just monitor JSON files for updates (for a Chef push model, or a Consul/etcd pull model). There is no reason QBit could not provide an implementation of its Service Discovery that worked with Vertx's clustering support. All of the major internal services that QBit provides are extensible with plugins via interfaces. There is plenty of opportunity for more integration of QBit and Vertx.

QBit and Vertx have both evolved to provide more and more support for microservices and there is a lot of synergy between the two libs.

QBit can also play well with Servlets, Spring MVC, Spring Boot, and other lightweight HTTP libs. QBit comes batteries included.


BACKGROUND


This feature allows you to create a service which can also serve up web pages and web resources for an app. Previously, QBit was focused on just being a REST microservices lib, i.e., routing HTTP calls and WebSocket messages to Java methods. Rather than reinvent the world, QBit now supports Vertx 3. There is a full example at the bottom of this page on how to combine QBit and Vertx.

The QBit support for Vertx 3 exceeds the support for Vertx 2.

QBit allows REST style support via annotations.

Example of QBit REST style support via annotations.

    @RequestMapping(value = "/todo", method = RequestMethod.DELETE)
    public void remove(final Callback<Boolean> callback, 
                       final @RequestParam("id") String id) {

        Todo remove = todoMap.remove(id);
        callback.accept(remove!=null);

    }

QBit, a microservices lib, also provides integration with Consul, a typed event bus (which can be clustered), greatly simplified reactive async callback coordination between services, and a lot more. Please read through the QBit overview.
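QBit's Reactor and callback types are not shown in this post; as a loose, plain-JDK analogy, coordinating two async calls and combining their results looks like this with CompletableFuture (illustration of the callback-coordination problem only, not QBit's API; the service calls are made up):

```java
import java.util.concurrent.CompletableFuture;

public class CallbackCoordinationSketch {

    // Two pretend async service calls (hypothetical, for illustration).
    static CompletableFuture<Integer> fetchTodoCount() {
        return CompletableFuture.supplyAsync(() -> 3);
    }

    static CompletableFuture<String> fetchUserName() {
        return CompletableFuture.supplyAsync(() -> "rick");
    }

    // Coordinate both calls and combine the results without nesting callbacks.
    static String summary() {
        return fetchTodoCount()
                .thenCombine(fetchUserName(), (count, name) -> name + " has " + count + " todos")
                .join();
    }

    public static void main(String[] args) {
        System.out.println(summary()); // rick has 3 todos
    }
}
```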

History: QBit at first only ran inside of Vertx 2. Then we decided (a client-driven decision) to make it standalone, and we lost the ability to run it embedded (we did not need embedding for any project on the roadmap).

Now you can use QBit features and Vertx 3 features via a mix-and-match model. You do this by setting up routers and/or a route in Vertx 3 that route to an HttpServer in QBit, and this takes about one line of code.

This means we can use Vertx 3's chunking, streaming, etc. for complex HTTP support, and use Vertx 3 as a normal HttpServer, but when we want REST-style async callbacks we can use QBit to route REST calls to Java methods. We can access all of the features of Vertx 3. (QBit was originally a Vertx 2 add-on lib, but then we had clients that wanted to run standalone and clients who wanted to use it with Servlets / embedded Jetty. This is more coming back home than a new thing.)
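As a rough, plain-JDK sketch of this delegation model (this is NOT the QBit or Vertx API; it uses only the JDK's built-in com.sun.net.httpserver to illustrate how one path is handled inline while a whole prefix, like /hello/*, is handed off to a separate handler):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PrefixRoutingSketch {

    /** Start a server on an ephemeral port with two prefix-routed handlers. */
    static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);

        // Analogue of the vertx route "/svr/rout1/": handled inline, answers 202.
        server.createContext("/svr/rout1/", exchange -> {
            byte[] body = "route1".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(202, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        // Analogue of handing "/hello/*" to QBit: a separate handler owns the
        // whole prefix and echoes the request body, like MyRestService.hello.
        server.createContext("/hello/", exchange -> {
            byte[] body = readAll(exchange.getRequestBody());
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        return server;
    }

    static byte[] readAll(InputStream in) throws java.io.IOException {
        java.io.ByteArrayOutputStream buf = new java.io.ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        for (int n; (n = in.read(chunk)) != -1; ) buf.write(chunk, 0, n);
        return buf.toByteArray();
    }

    /** POST a JSON string and return "statusCode body". */
    static String post(int port, String path, String json) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + path).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream()) {
            return conn.getResponseCode() + " " + new String(readAll(in), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        int port = server.getAddress().getPort();
        System.out.println(post(port, "/svr/rout1/", "\"hi\""));
        System.out.println(post(port, "/hello/world", "\"hi\""));
        server.stop(0);
    }
}
```

The point is the shape, not the library: one server, one inline route, and one prefix delegated wholesale to another component.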

You can run QBit standalone and if you do, it uses Vertx 3 like a network lib, or you can run QBit inside of Vertx 3.

We moved this up for two reasons. We were going to start using Vertx support for DNS to read DNS entries for service discovery in a Heroku-like environment, and it made no sense to invest a lot of time in the Vertx 2 API when we were switching to Vertx 3 in the short term. We also had some services that needed to serve up an SPA (Single Page App), so we had to extend the support for Vertx anyway or add these features to QBit (which it sort of has, but that is not really its focus, so we would rather just delegate it to Vertx 3), and it made no sense to do that with Vertx 2.

Also, the Vertx 3 environment is a very vibrant one that shares many philosophies with QBit.

Let's cover where the Vertx3 integration and QBit come in.

We added a new class called a VertxHttpServerBuilder (extends HttpServerBuilder), which allows one to build a QBit HTTP server from a vertx object, a vertxHttpServer and optionally from a Vertx router or a Vertx route.

Note that you can pass a QBit HttpServerBuilder or a QBit HttpServer to a QBit EndpointServerBuilder to use that builder or HttpServer instead of the default. VertxHttpServerBuilder is a QBit HttpServerBuilder, so you construct it, associate it with Vertx, and then inject it into EndpointServerBuilder. This is how we integrate with the QBit REST/WebSocket support. If you are using QBit REST with Vertx, that is one integration point.

Also note that you can pass an HttpServerBuilder or an HttpServer to a ManagedServiceBuilder to use that builder or HttpServer instead of the default. If you want to use QBit REST and QBit Swagger support with Vertx, then you would want to use ManagedServiceBuilder with this class.

Read the Vertx 3 guide on HTTP routing for more details.

Here are some docs taken from our JavaDocs for QBit's VertxHttpServerBuilder. VertxHttpServerBuilder also allows one to pass a shared Vertx object if running inside of the Vertx world. It also allows one to pass a shared Vertx HttpServer if you want to use more than just QBit routing. If you are using Vertx routing, or you want to limit this QBit HttpServer to one route, then you can pass a route.

Note: QBit's Vertx 2 support is EOL. We will be phasing it out shortly.

Here are some code examples on how to mix and match QBit and Vertx 3.

Usage

Creating a QBit HttpServer that is tied to a single vertx route

    HttpServer httpServer = VertxHttpServerBuilder.vertxHttpServerBuilder()
                    .setVertx(vertx).setHttpServer(vertxHttpServer).setRoute(route).build();
    httpServer.start();

Creating a QBit HttpServer and passing a router so it can register itself as the default route

    Router router = Router.router(vertx); //Vertx router
    Route route1 = router.route("/some/path/").handler(routingContext -> {
        HttpServerResponse response = routingContext.response();
        // enable chunked responses because we will be adding data as
        // we execute over other handlers. This is only required once and
        // only if several handlers do output.
        response.setChunked(true);
        response.write("route1\n");

        // Call the next matching route after a 5 second delay
        routingContext.vertx().setTimer(5000, tid -> routingContext.next());
    });

    //Now install our QBit Server to handle REST calls.
    vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
                    .setVertx(vertx).setHttpServer(vertxHttpServer).setRouter(router);

    HttpServer httpServer = vertxHttpServerBuilder.build();
    httpServer.start();

Note that you can pass an HttpServerBuilder or an HttpServer to EndpointServerBuilder to use that builder or HttpServer instead of the default. If you are using QBit REST with Vertx, that is one integration point.

EndpointServerBuilder integration

    //Like before
    vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
                    .setVertx(vertx).setHttpServer(vertxHttpServer).setRouter(router);

    //Now build it and inject the resulting HttpServer into the endpointServerBuilder
    HttpServer httpServer = vertxHttpServerBuilder.build();
    endpointServerBuilder.setHttpServer(httpServer);

Also note that you can pass an HttpServerBuilder or an HttpServer to a ManagedServiceBuilder to use that builder or HttpServer instead of the default.

If you wanted to use QBit REST and QBit Swagger support with Vertx then you would want to use ManagedServiceBuilder with this class.

ManagedServiceBuilder integration

    //Like before
    vertxHttpServerBuilder = VertxHttpServerBuilder.vertxHttpServerBuilder()
                    .setVertx(vertx).setHttpServer(vertxHttpServer).setRouter(router);

    //Now build it and inject the resulting HttpServer into the managedServiceBuilder
    HttpServer httpServer = vertxHttpServerBuilder.build();
    managedServiceBuilder.setHttpServer(httpServer);

Read the Vertx guide on routing for more details: the Vertx Http Ext Manual.


QBit Java Microservices lib tutorials

The Java microservice lib. QBit is a reactive programming lib for building microservices: JSON, HTTP, WebSocket, and REST. QBit uses reactive programming to build elastic, cloud-friendly, REST and WebSocket based web services. SOA evolved for mobile and cloud. Service discovery, health, reactive StatService, events, and Java-idiomatic reactive programming for microservices.

Find more tutorials on QBit.

Reactive Programming | Java Microservices | Rick Hightower

High-speed microservices consulting firm and authors of QBit with lots of experience with Vertx - Mammatus Technology

Highly recommended consulting and training firm who specializes in microservices architecture and mobile development that are already very familiar with QBit and Vertx as well as iOS and Android - About Objects

Java Microservices Architecture

Microservice Service Discovery with Consul

Microservices Service Discovery Tutorial with Consul

Reactive Microservices

High Speed Microservices

Java Microservices Consulting

Microservices Training

Reactive Microservices Tutorial, using the Reactor

QBit is mentioned in the Restlet blog

Tim Fox
Sep 30, 2015, 3:48:57 AM
Hi Rick,

Looks great! Always nice to see the ecosystem growing with interesting related projects :)

Feel free to add a link to QBit on vertx-awesome. https://github.com/vert-x3/vertx-awesome

Regarding the event bus - we're going to be making the event bus transport properly pluggable in Vert.x 3.2 so different implementations can be plugged in. I know the event bus was one of the factors that got you started with Qbit in the first place, maybe this is useful to you?
--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vertx+un...@googlegroups.com.
Visit this group at http://groups.google.com/group/vertx.
To view this discussion on the web, visit https://groups.google.com/d/msgid/vertx/bd25675c-22fc-4b7c-940d-909cdf4aacdf%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Rick Hightower
Sep 30, 2015, 11:39:25 AM
Alexandru,

compile 'com.github.advantageous:qbit-vertx:0.9.1-RC2'

It is a new feature and has not made it to an official release yet.


It is in a release candidate. (We use a different group for release candidates at the moment.)
We just started supporting Vertx 3 (last week). Before that we ran on top of Vertx 2, but mainly as a lib.

(You could use QBit with Servlets or other http libs. But it was originally developed to work with Vertx.)

I am keeping notes about QBit and Vertx integration here:







On Wed, Sep 30, 2015 at 6:23 AM, Alexandru Ardelean <ardelean.i...@gmail.com> wrote:
Nice work, but I do have a question. Which version do I have to use for QBit in order to have the VertxHttpServerBuilder? I have tried qbit-vertx:0.7.2 and qbit-vertx:0.9.0.RELEASE to no avail.

Best Regards, Alexandru.





Rick Hightower
Sep 30, 2015, 11:57:32 AM
Thanks Tim,

Yeah.. I think the QBit queuing and event bus are at the wrong level. It sends events and method calls in batches, and most of the speed comes from that.

It could ride on top of the Vertx event bus (in fact the first implementation did just that). 
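The batching idea Rick describes can be sketched in a few lines of plain Java. This is a hypothetical illustration of the concept (class and method names like `MicroBatcher` are made up, not QBit's actual API): callers hand messages in one at a time, and the batcher passes them downstream in chunks, amortizing per-send overhead.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of the micro-batching concept (not QBit's real code):
// buffer messages and hand them downstream in chunks instead of one at a time.
public class MicroBatcher<T> {
    private final int batchSize;
    private final Consumer<List<T>> downstream;
    private List<T> batch = new ArrayList<>();

    public MicroBatcher(int batchSize, Consumer<List<T>> downstream) {
        this.batchSize = batchSize;
        this.downstream = downstream;
    }

    // Buffer one message; flush automatically once the batch fills.
    public void send(T message) {
        batch.add(message);
        if (batch.size() >= batchSize) flush();
    }

    // Push out whatever is buffered (in QBit a periodic timer plays this
    // role, so a partial batch never waits forever).
    public void flush() {
        if (batch.isEmpty()) return;
        downstream.accept(batch);
        batch = new ArrayList<>();
    }

    public static void main(String[] args) {
        List<Integer> chunkSizes = new ArrayList<>();
        MicroBatcher<String> batcher =
                new MicroBatcher<>(3, chunk -> chunkSizes.add(chunk.size()));
        for (int i = 0; i < 7; i++) batcher.send("msg" + i);
        batcher.flush(); // timer-style flush for the leftover partial batch
        System.out.println(chunkSizes); // prints [3, 3, 1]
    }
}
```

Seven sends with a batch size of 3 produce two full chunks and one partial one, so the downstream consumer is invoked three times instead of seven.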

I will look into it, though.

Thanks for your suggestion. 

I am a much better developer having used Vertx. It twisted my brain and then put it back together again. Great stuff!

--Rick


Alexandru Ardelean
Sep 30, 2015, 2:04:15 PM
Thanks for the tip Rick, I did so and it worked.
Great work, lots of features, very useful.
If I may, I could make a small remark though.
I was a little confused about the route parameter: why does it have to be set on the server, since we already have the requestMapping, and why does it have to be the same path described in the requestMapping?
The route is set only once, which makes just one resource available at a time, correct?

Also, the event bus migration to its origin sounds like a very good idea, as the performance in my tests has greatly decreased using QBit versus using the Vertx bus directly. I have 2 implementations of a very simple REST GET service: with QBit, and without (using the Vertx event bus and handlers). The difference is big; it could be my configuration, but I doubt it. I followed your exact example.

Rick Hightower
Sep 30, 2015, 3:40:42 PM
inline

On Wed, Sep 30, 2015 at 11:04 AM, Alexandru Ardelean <ardelean.i...@gmail.com> wrote:
Thanks for the tip Rick, I did so and it worked.

Thanks. 
Great work, lots of features, very useful.

Thanks.
 
If I may, I could make a small remark though.
I was a little confused about the route parameter: why does it have to be set on the server, since we already have the requestMapping, and why does it have to be the same path described in the requestMapping?

It is just an example to show that you could assign QBit one route and then use the other routes to handle other things (resources, etc.) all from Vertx.

You don't have to use `route` at all.

There is a whole section in:


It covers route, router, and using the vertxHttpServer directly.
 
The route is set only once, which makes just one resource available at a time, correct?

You don't have to use a route. You can pass it a router, a vertxHttpServer, or a route. It is up to you.
 

Also, the event bus migration to its origin sounds like a very good idea, as the performance in my tests has greatly decreased using QBit versus using the Vertx bus directly.

Your example does not use the QBit event bus at all.
I need to write a performance tuning guide for qbit.

I am able to get about 85K TPS on OSX and about 150K or so TPS on a fairly untuned Linux desktop machine using wrk.

There could be a performance regression. But more than likely you need to decrease the batch size and/or the flush rate.

QBit tries to bundle up many requests (method calls) to the service at a time.
It excels on a heavily loaded system (its first app was a 100-million-user app that ran on 10 servers but could run on three).

The idea is to bundle up as many requests as possible and shove them all over at once.

It can do well in a lower-traffic app, but you would have to adjust the batch size and flush rate to a much lower level.
 
I have 2 implementations of a very simple REST GET service: with QBit, and without (using the Vertx event bus and handlers). The difference is big; it could be my configuration, but I doubt it. I followed your exact example.

Yep. Try starting 200 to 1000 clients on 4 to 8 threads using wrk.


The default configuration is tuned to handle a lot of hits in a short amount of time. 

Or.. there could be a perf regression. This is an RC. I need to spend more time doing perf analysis. 

I have several versions of QBit in prod. One handled a tremendous amount of load for about a year before they redid the backend (and front-end) in something else.

The other one is in production and it does rate limiting on OAuth tokens at a fairly popular web site / mobile app back-end in SF.

But I tuned it and ran perf tests. 


Rick Hightower
Sep 30, 2015, 3:42:33 PM
Then there are a lot of smaller apps and microservices deployed which have not ramped up load yet to really push QBit. 

Rick Hightower
Sep 30, 2015, 4:32:01 PM
Ok.. I think this is what you are seeing...

I got it to run around 80K TPS on OSX.

I switched one of the examples over to use QBit vertx 3.


  8 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   228.37ms   63.08ms 360.84ms   74.05%
    Req/Sec     4.26      1.82    10.00     86.07%
  1048 requests in 30.06s, 73.69KB read
Requests/sec:     34.86
Transfer/sec:      2.45KB


That is very low. BUT....

But by default QBit is tuned for throughput: a batch size of 50 and a flush rate of 100 ms (doing this from memory, so +-50%).
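The flush rate is the timer side of this tuning. Here is a hypothetical sketch of the idea (plain Java, invented names, not QBit's actual code): a scheduler drains whatever has been buffered since the last tick, so a partial batch waits at most one flush interval.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch (not QBit code) of a time-based flush: a scheduled
// task drains the buffer on every tick, bounding how long a partial batch
// can sit waiting.
public class TimedFlusher {
    private final List<String> buffer = new ArrayList<>();
    private final List<List<String>> flushed = new ArrayList<>();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    public synchronized void send(String message) { buffer.add(message); }

    private synchronized void flush() {
        if (buffer.isEmpty()) return;          // nothing buffered this tick
        flushed.add(new ArrayList<>(buffer));
        buffer.clear();
    }

    public void start(long intervalMillis) {
        timer.scheduleAtFixedRate(this::flush, intervalMillis, intervalMillis,
                TimeUnit.MILLISECONDS);
    }

    public synchronized String result() { return flushed.toString(); }

    public static void main(String[] args) throws InterruptedException {
        TimedFlusher flusher = new TimedFlusher();
        flusher.start(100);                    // flush every 100 ms
        flusher.send("a");
        flusher.send("b");
        Thread.sleep(300);                     // let the timer fire
        flusher.timer.shutdown();
        System.out.println(flusher.result()); // prints [[a, b]]
    }
}
```

With this in mind, "tune the flush rate" just means changing the scheduler interval: shorter for latency, longer for throughput.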

Increase the connections as high as your OS will allow and retry it.

  8 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.96ms    1.12ms  44.52ms   75.68%
    Req/Sec     9.98k     0.98k   12.67k    82.79%
  2383644 requests in 30.01s, 163.67MB read
  Socket errors: connect 0, read 459, write 0, timeout 0
Requests/sec:  79429.83
Transfer/sec:      5.45MB


We are hitting around 80K. I think I can get Vertx to around 85K or so on this laptop.

We start hitting the limitations of running this all on one OS before we run into the limitations of QBit/Vertx.

If you wanted to run something that worked at a lower load, you could reduce the batch size to 5, and reduce the flush rate to 10 ms.

If you had the hardware to support it and you could get 200K requests per second or more, then you would want to increase the batch size to 100 and increase the flush rate to 100 ms.

Future versions of QBit will go into a low-latency mode when under no load (switching the batch size on the fly). There are ways to do this now, but it requires some hand tuning.

I have things running in prod where every request has to make an async call to a clustered stats system to see if the app id (OAuth) has exceeded its rate for this second, and the system can handle 50K or more TPS per node. It can send counts and query counts fast enough to do rate limiting across a cluster of machines and still deliver perf on each node. In the other system that used QBit, the F5 died before the 10 QBit/Vertx servers running the app did.







Rick Hightower
Sep 30, 2015, 5:43:47 PM
I set up an example where I have two routes:
one goes through QBit and one just goes through Vertx.

  8 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.08ms    1.15ms  62.38ms   81.06%
    Req/Sec     9.93k     1.03k   14.79k    80.96%
  2371669 requests in 30.01s, 162.85MB read
  Socket errors: connect 0, read 424, write 0, timeout 0
Requests/sec:  79022.10

QBit hits two extra queues, and the body has to be JSON-deserialized.

Here is a simple route using Vertx:

$ wrk  -d 30s -t 8 -c 500 http://localhost:8888/svr/rout1/
  8 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.58ms    1.07ms  44.52ms   81.41%
    Req/Sec    10.59k     1.13k   14.96k    82.58%
  2528846 requests in 30.02s, 173.93MB read
  Socket errors: connect 0, read 461, write 0, timeout 0
Requests/sec:  84251.46



The batching, flushing, etc. buy you something, but it is hard to see in this simple example.
What we see is a 5 to 6% overhead for QBit. When services need to call other services and get return values async, etc., that is when QBit pays off. Or at least it did for me.
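The 5 to 6% figure follows directly from the two wrk runs quoted above, as a quick check:

```java
// Compute QBit's relative overhead from the two Requests/sec numbers
// reported by wrk in the runs above.
public class OverheadCalc {
    public static void main(String[] args) {
        double vertxOnly = 84251.46; // plain Vert.x route
        double viaQbit = 79022.10;   // route going through QBit
        double overheadPct = (vertxOnly - viaQbit) / vertxOnly * 100.0;
        // round to one decimal without locale-dependent formatting
        System.out.println("QBit overhead: "
                + Math.round(overheadPct * 10) / 10.0 + "%");
    }
}
```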

Also, if you can push more through the pipe, you will see less difference as the batching works better. (Which may not be possible on this setup, i.e., a laptop.)

The best example of this is here:






Alexandru Ardelean
Oct 1, 2015, 4:35:30 AM
Ok, thanks again for your promptness.

Yes, I will retry the tests. I tried with around 100 connections and the performance was about 20 times fewer TPS than plain Vertx. I will let it rise to 500 connections.
My machine runs Windows (my Ubuntu machine broke this weekend) with an 8-core Intel Xeon at 3.4 GHz.
Anyhow, I am doing performance tests with 256 connections, a 1 minute ramp-up, then 30 seconds:
 - undertow handlers (similar to vertx): 12k TPS
 - undertow servlet (Spring Boot annotations): 9k TPS
 - vertx simple event bus: 11.7k TPS
 - qbit: didn't let the test finish, as at around 100 connections it was under 500 TPS
 - undertow handlers + mongo: 10k
 - vertx + mongo: 10k
 - undertow handlers + mysql: 5.5k
 - vertx + mysql: 5k

I will continue to play around with optimizations.
Best Regards, Alexandru

Alexandru Ardelean
Oct 1, 2015, 5:15:50 PM
Yes, I redid the tests with QBit and let it fly up to 700 connections. You were right; it started showing impressive numbers: 8.5k TPS.
Very nice; my favorite REST annotations from now on.

Keep it up, Alex.

Rick Hightower
Oct 1, 2015, 7:47:53 PM
No, thank you. Feedback is good.

I am thinking I need something that does not require so many connections out of the gate to do well on a test. It was designed for this use case (150K-plus TPS on 50K-plus connections), but as I can see now, it does not look so good if you are running with fewer connections.

There is another setup, which used to be the default, where it could detect if the receiver queue thread was not busy and then send whatever batch it had at that point.

But please try with more connections too. (I read ahead and see that you did.)

It needs to work better on the low end of the connection side. 
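The idle-receiver heuristic described above can be sketched as a tiny decision function (hypothetical names, not QBit's real code): flush immediately when the receiver has nothing to do, otherwise hold out for a full batch.

```java
// Hypothetical sketch (not QBit's actual implementation) of an adaptive
// flush policy: an idle receiver gets whatever is buffered right away
// (low latency); a busy receiver waits for a full batch (throughput).
public class AdaptiveFlushPolicy {
    private final int fullBatch;

    public AdaptiveFlushPolicy(int fullBatch) { this.fullBatch = fullBatch; }

    // Decide whether to flush now, given how much is buffered and whether
    // the receiver queue thread currently has nothing to process.
    public boolean shouldFlush(int buffered, boolean receiverIdle) {
        if (buffered == 0) return false;   // nothing to send
        if (receiverIdle) return true;     // low-latency mode
        return buffered >= fullBatch;      // throughput mode
    }

    public static void main(String[] args) {
        AdaptiveFlushPolicy policy = new AdaptiveFlushPolicy(50);
        System.out.println(policy.shouldFlush(3, true));   // true: receiver idle
        System.out.println(policy.shouldFlush(3, false));  // false: wait for batch
        System.out.println(policy.shouldFlush(50, false)); // true: batch full
    }
}
```

This is why the low-connection case suffers with a fixed batch size: a nearly empty buffer just sits until the flush timer fires, whereas the adaptive policy would send it as soon as the receiver went idle.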




Rick Hightower
Oct 1, 2015, 7:54:11 PM
Thanks Alex.

I am going to try to do better with lower connection counts.
It should be within 5% or so of bare Vertx.

It will never match bare Vertx because QBit does method marshaling, reflection calls, JSON parsing and/or serializing, metrics gathering, etc., but it should get close. In short, there is overhead.

But it seems like you got to about 15% (85% of 100%) using QBit in your test. This lets me sleep better. :)

There might be some opportunity for QBit to improve performance. 

And I will work on a better algorithm for flushing. I have one that I stopped using because I got better results on the top end (more connections), which was the primary use case for QBit.

I either have to fix it for lower connection counts, or publish a tuning guide, or both.




Rick Hightower
Oct 1, 2015, 7:57:38 PM
I tested QBit in the past against Spring Boot/Undertow and it was about 2x the perf (2x better than Spring Boot and Undertow using QBit/Vertx), but I suspect they have improved. (This is from memory; I know QBit was faster, and I thought it was about 2x.) This is the world. Things keep moving. :)

I probably should learn how to run your perf test. I would expect QBit to be closer to the top. 



Rick Hight
Oct 12, 2015, 3:37:39 AM
Alex (Alexandru Ardelean):

Next release does better with lower connection counts:

40 connections:

$ wrk -t 4 -d 10 -c 40  http://localhost:9999/services/helloservice/hello
Running 10s test @ http://localhost:9999/services/helloservice/hello
  4 threads and 40 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   533.42us  128.77us   2.29ms   88.93%
    Req/Sec    18.85k     1.66k   22.28k    60.89%
  757677 requests in 10.10s, 83.82MB read
Requests/sec:  75009.08
Transfer/sec:      8.30MB

300 connections:

$ wrk -t 4 -d 10 -c 300  http://localhost:9999/services/helloservice/hello
Running 10s test @ http://localhost:9999/services/helloservice/hello
  4 threads and 300 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.59ms  783.57us  20.49ms   75.04%
    Req/Sec    20.63k     1.95k   22.82k    82.25%
  820587 requests in 10.01s, 90.78MB read
  Socket errors: connect 0, read 163, write 0, timeout 0
Requests/sec:  82008.93
Transfer/sec:      9.07MB

Alexandru Ardelean
Oct 12, 2015, 3:45:02 AM
Wow, the tweaking is worth it. The numbers look promising.

Thanks.