Quarkus threadpool configuration

rreddy official

Apr 26, 2021, 3:46:20 AM
to Quarkus Development mailing list
Hi Team,

I am looking for info on the thread pool property below:

With the default configuration I only get 200 threads.
I have a requirement to spawn as many threads as possible to handle incoming requests.

What value should I configure for this property to achieve that?

Thanks,
Rajesh Reddy

 

Paul Carter-Brown

Apr 26, 2021, 3:51:34 AM
to rreddy13...@gmail.com, Quarkus Development mailing list
"As many threads as possible" depends on your available memory. I can't fathom a JVM having 10's of thousands of threads so if you want to make it unlimited you could just set it as a large number like 10000000 and you will hit some other memory issue prior to getting close to that.

By the sound of your scenario, though, you probably need to look at your design, because if you really need that many threads then you are likely to need that many DB connections as well (assuming you are doing some DB work). This sounds like a system that could be refactored into a non-blocking pattern using RESTEasy Reactive, Hibernate Reactive, etc. if huge concurrency is likely.
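
Just to illustrate the non-blocking style, a rough sketch (assuming RESTEasy Reactive with the Mutiny Vert.x WebClient; class and endpoint names are made up, and the WebClient would need a CDI producer):

    import io.smallrye.mutiny.Uni;
    import io.vertx.mutiny.ext.web.client.WebClient;
    import javax.inject.Inject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    @Path("/proxy")
    public class ProxyResource {

        @Inject
        WebClient client; // assumed to be produced elsewhere from the Mutiny Vertx instance

        @GET
        public Uni<String> get() {
            // no worker thread is blocked while the downstream call is in flight
            return client.getAbs("https://example.com/api")
                         .send()
                         .onItem().transform(resp -> resp.bodyAsString());
        }
    }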



John O'Hara

Apr 26, 2021, 4:06:33 AM
to rreddy13...@gmail.com, Quarkus Development mailing list
Can you please elaborate on the "as many threads as possible to handle incoming requests" requirement?

A thread per request does not scale under heavy concurrent load and will likely cause performance bottlenecks.



--

John O'Hara

Principal Software Engineer, Middleware (Performance)

Red Hat

rreddy official

Apr 26, 2021, 4:17:05 AM
to Quarkus Development mailing list
Hi Paul,

Thank you for the reply and info.
Is it possible to configure it so that once memory is at its maximum, further requests are queued (without causing memory issues) until other threads become free?

All I am trying to do is avoid memory issues while at the same time using as many threads as possible for the available memory capacity.
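
For example, is there something along these lines in application.properties that bounds the pool and queues the rest (I am guessing at the property names, and the values are just for illustration):

    quarkus.thread-pool.max-threads=400
    quarkus.thread-pool.queue-size=10000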


In my scenario, we already use the non-blocking Vert.x Web Client (https://vertx.io/docs/vertx-web-client/java/) to process the requests.
But we have a very high volume of incoming requests.

Best regards,
Rajesh

rreddy official

Apr 26, 2021, 4:24:19 AM
to Quarkus Development mailing list
My use case:

We have a JMS message listener, and on each message we use a ManagedExecutor to spawn a thread that processes the message (triggers an HTTP call), while the main thread goes on to receive the next message.

If I am not wrong, this ManagedExecutor uses the thread pool whose maximum size is the greater of 200 and 8 * processors.

So I would like to increase this limit so that the ManagedExecutor can spawn more threads to handle incoming messages.
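
Roughly what we do today, simplified (class and method names made up):

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import org.eclipse.microprofile.context.ManagedExecutor;

    @ApplicationScoped
    public class IncomingMessageListener implements MessageListener {

        @Inject
        ManagedExecutor executor; // as I understand it, backed by the Quarkus worker pool

        @Override
        public void onMessage(Message message) {
            // hand the message off so this listener thread can receive the next one
            executor.submit(() -> triggerHttpCall(message));
        }

        private void triggerHttpCall(Message message) {
            // HTTP call per message goes here
        }
    }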

Paul Carter-Brown

Apr 26, 2021, 4:46:57 AM
to rreddy13...@gmail.com, Quarkus Development mailing list
How many messages/second at peak on the JMS topic, and what is the latency of the HTTP call that you make for each message? I need to understand what kind of concurrency you are talking about.


rreddy official

Apr 26, 2021, 5:06:30 AM
to Quarkus Development mailing list
Throughput is 20000/sec, and HTTP latency is 10-15 ms.

Paul Carter-Brown

Apr 26, 2021, 5:29:20 AM
to rreddy13...@gmail.com, Quarkus Development mailing list
I'd recommend having a dedicated named executor pool with a thread count equal to your processor count, and then doing the HTTP call with the non-blocking client and processing the result reactively. The pool should dequeue off the topic directly, without a context switch from the listener to the pool. I'm not familiar with the JMS infrastructure you are using and what's possible, but see if you can avoid that context switch.

But in essence, for the throughput and latency you are talking about, I would not have something happily dequeueing and dumping into an executor pool queue without any form of backpressure. Having the workers do the dequeuing means that the JMS subsystem is correctly being used as the work queue, as opposed to a thread pool's worker queue. This makes scaling a lot easier, as multiple nodes can process messages, and messages are not lost if they have been dequeued and are sitting in an executor queue when the JVM dies.
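
For context, at 20000 msg/s and 10-15 ms per call you only need roughly 20000 * 0.015, i.e. about 300 requests in flight, which a small number of threads can drive as long as the HTTP call itself is non-blocking. A very rough, untested sketch of the worker-dequeue pattern (assuming a javax.jms ConnectionFactory and the Vert.x WebClient; the queue name, URL and class names are made up, and I've used a queue rather than a topic for simplicity):

    import io.vertx.core.Vertx;
    import io.vertx.ext.web.client.WebClient;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Message;

    public class QueueWorkers {

        public void start(ConnectionFactory cf, Vertx vertx) {
            WebClient client = WebClient.create(vertx);
            int workers = Runtime.getRuntime().availableProcessors();
            // dedicated pool sized to the processor count
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (int i = 0; i < workers; i++) {
                pool.submit(() -> consumeLoop(cf, client));
            }
        }

        private void consumeLoop(ConnectionFactory cf, WebClient client) {
            Semaphore inFlight = new Semaphore(100); // cap on concurrent HTTP calls = backpressure
            try (JMSContext ctx = cf.createContext()) {
                // each worker pulls directly off the destination, so the broker is the work queue
                JMSConsumer consumer = ctx.createConsumer(ctx.createQueue("incoming"));
                while (true) {
                    Message msg = consumer.receive();  // blocks until a message is available
                    inFlight.acquireUninterruptibly(); // don't outrun the downstream service
                    // build the HTTP request from msg (omitted) and send it without blocking
                    client.postAbs("https://downstream.example/api")
                          .send(ar -> {
                              inFlight.release();
                              // handle success/failure, ack/retry as appropriate
                          });
                }
            }
        }
    }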


rreddy official

Apr 29, 2021, 1:12:28 AM
to Quarkus Development mailing list
Thank you for the valuable input. I will analyze this.