Vert.x web server running long blocking task


andriusk

Jan 11, 2022, 2:20:57 PM
to vert.x
Let's say I have a web server with one endpoint. It does some long computation and returns a JSON object.

The computation takes 1 second, and let's assume the requirements tolerate that.
I know I cannot block the event loop, so I decided to run the computation inside executeBlocking.

However, during load testing, I am getting very poor results.

I have come across many examples of how to run executeBlocking, but never any with a RoutingContext, so I am sure I am missing something and doing it wrong.

Command:
echo "GET http://localhost:7070/number" | vegeta attack -rate=500 -duration=30s -max-workers=1000 -timeout=10s | vegeta report -type=text

Result:
Requests      [total, rate, throughput]         3010, 100.27, 0.22
Duration      [total, attack, wait]             40.022s, 30.02s, 10.001s
Latencies     [min, mean, 50, 90, 95, 99, max]  1.063s, 9.986s, 10.001s, 10.001s, 10.001s, 10.001s, 10.001s
Bytes In      [total, mean]                     360, 0.12
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           0.30%
Status Codes  [code:count]                      0:3001  200:9  
Error Set:
Get "http://localhost:7070/number": context deadline exceeded (Client.Timeout exceeded while awaiting headers)


Implementation:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;

import java.time.LocalDateTime;

public class NumberServerVerticle extends AbstractVerticle {

  @Override
  public void start(Promise<Void> startPromise) throws Exception {
    var router = Router.router(vertx);
    router.get("/number").handler(context -> {
      vertx.executeBlocking(promise -> {
        try {
          Thread.sleep(1000); // simulate the 1-second computation
        } catch (InterruptedException e) {
          e.printStackTrace();
        }
        promise.complete();
      }, ar -> {
        if (ar.succeeded()) {
          context.end(new JsonObject().put("time", LocalDateTime.now().toString()).toBuffer());
        } else {
          context.fail(500, ar.cause());
        }
      });
    });

    vertx.createHttpServer()
        .requestHandler(router)
        .listen(7070)
        .onSuccess(s -> startPromise.complete())
        .onFailure(startPromise::fail);
  }
}

Julien Ponge

Jan 11, 2022, 2:45:07 PM
to ve...@googlegroups.com
> Computation takes 1 second and let's assume that requirements tolerate that.
> I know I cannot block event loop so decided to run it inside executeBlocking.
>
> However, during load testing, I am getting very poor results.

That’s to be expected.

executeBlocking offloads a task from the event loop to a worker thread (managed in a thread pool), and you block that thread.

If you stress the HTTP server then the worker thread pool is the inevitable bottleneck 🤷‍♂️
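The bottleneck can be illustrated with plain java.util.concurrent (a sketch, not Vert.x itself): ordered executeBlocking calls from one context behave like a single-thread queue, while unordered execution behaves like a pool where tasks overlap. The class and method names here are invented for the illustration.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolBottleneck {

  // Submit n tasks that each sleep sleepMs, wait for all of them,
  // shut the pool down, and return the elapsed wall-clock time in ms.
  static long runTasks(ExecutorService pool, int n, long sleepMs) throws Exception {
    long start = System.nanoTime();
    CountDownLatch done = new CountDownLatch(n);
    for (int i = 0; i < n; i++) {
      pool.submit(() -> {
        try {
          Thread.sleep(sleepMs);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
        done.countDown();
      });
    }
    done.await();
    pool.shutdown();
    return (System.nanoTime() - start) / 1_000_000;
  }

  public static void main(String[] args) throws Exception {
    // Serialized (like ordered executeBlocking): ~n * sleepMs total.
    long serial = runTasks(Executors.newSingleThreadExecutor(), 10, 50);
    // Overlapping (like ordered=false on a big enough pool): ~sleepMs total.
    long parallel = runTasks(Executors.newFixedThreadPool(10), 10, 50);

    System.out.println("serial:   " + serial + " ms");
    System.out.println("parallel: " + parallel + " ms");
  }
}
```

With 500 requests/s each blocking a worker for a full second, the serialized case is exactly the timeout pile-up seen in the vegeta report.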

andriusk

Jan 12, 2022, 2:37:06 PM
to vert.x
I found out that executeBlocking runs every request on the same thread, vert.x-worker-thread-0.
That's why it is so terribly slow. I would expect it to pick an available thread from the thread pool every time a new request comes in.

In comparison, I ran a Spring Boot app with the same logic and server.tomcat.threads.max=200; it yielded slow responses (up to 5 seconds), but with 100% success.

echo "GET http://localhost:8080/number" | vegeta attack -rate=500 -duration=30s -max-workers=1000 | vegeta report -type=text
Requests      [total, rate, throughput]         6801, 225.83, 193.63
Duration      [total, attack, wait]             35.123s, 30.115s, 5.008s
Latencies     [min, mean, 50, 90, 95, 99, max]  1s, 4.551s, 5.002s, 5.008s, 5.009s, 5.011s, 5.705s
Bytes In      [total, mean]                     272034, 40.00

Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:6801  
Error Set:

Julien Ponge

Jan 12, 2022, 3:03:14 PM
to ve...@googlegroups.com
You’re comparing apples and oranges here.

On 12 Jan 2022 at 20:37, andriusk <a.kal...@gmail.com> wrote:



andriusk

Jan 12, 2022, 3:27:22 PM
to vert.x
Okay, I am getting there. Instead of calling executeBlocking on each request, I used blockingHandler with the ordered flag set to false.
Now I can see that different threads from the thread pool are being used, and the success rate went up to 39.22%.
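For reference, the original route rewritten with blockingHandler would look roughly like this (a sketch against the Vert.x 4 web API, not compiled here; the route, port, and JSON body are taken from the earlier post):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;

import java.time.LocalDateTime;

public class BlockingHandlerVerticle extends AbstractVerticle {

  @Override
  public void start(Promise<Void> startPromise) {
    var router = Router.router(vertx);

    // The second argument is the ordered flag: false means concurrent
    // requests may run on different worker threads in parallel.
    router.get("/number").blockingHandler(context -> {
      try {
        Thread.sleep(1000); // the 1-second "computation"
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      context.end(new JsonObject().put("time", LocalDateTime.now().toString()).toBuffer());
    }, false);

    vertx.createHttpServer()
        .requestHandler(router)
        .listen(7070)
        .onSuccess(s -> startPromise.complete())
        .onFailure(startPromise::fail);
  }
}
```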

I think the default worker thread pool size is 20; can it be increased?
Setting new DeploymentOptions().setWorkerPoolSize(200) didn't help; there are still only 20 threads in the pool.

Julien Ponge

Jan 12, 2022, 3:31:16 PM
to ve...@googlegroups.com
Have you looked at https://vertx.io/docs/vertx-core/java/#blocking_code ?

Especially:

By default, if executeBlocking is called several times from the same context (e.g. the same verticle instance) then the different executeBlocking are executed serially (i.e. one after another).

If you don’t care about ordering you can call executeBlocking specifying false as the argument to ordered. In this case any executeBlocking may be executed in parallel on the worker pool.

An alternative way to run blocking code is to use a worker verticle

A worker verticle is always executed with a thread from the worker pool.

By default blocking code is executed on the Vert.x worker pool, configured with setWorkerPoolSize.
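The ordered flag from the docs maps onto the three-argument executeBlocking overload, so the original handler body could also stay with executeBlocking; a sketch against the Vert.x 4 callback API (not compiled here):

```java
// Inside the route handler, same blocking body as before:
vertx.executeBlocking(promise -> {
  try {
    Thread.sleep(1000);
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
  }
  promise.complete();
}, false, ar -> {   // ordered = false: calls may run in parallel on the worker pool
  // handle the result as in the original code
});
```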



andriusk

Jan 12, 2022, 4:43:55 PM
to vert.x
I had looked at it; however, I missed https://vertx.io/docs/vertx-web/java/#_using_blocking_handlers
 
I use Gradle with the default Vert.x launcher class, so I had to add a custom Launcher where I can set the pool size:

package com.example.starter;

import io.vertx.core.VertxOptions;

public class Launcher extends io.vertx.core.Launcher {

  @Override
  public void beforeStartingVertx(VertxOptions options) {
    options.setWorkerPoolSize(200);
  }

  public static void main(String[] args) {
    new Launcher().dispatch(args);
  }
}


All works well now!

Going back to apples and oranges: I ran both apps with a thread pool size of 200 and they both got a 100% success rate.
Vert.x, however, did a little better than Spring Boot on response times :)

Thanks Julien! 


