--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vertx+un...@googlegroups.com.
Visit this group at https://groups.google.com/group/vertx.
To view this discussion on the web, visit https://groups.google.com/d/msgid/vertx/6a388a10-fbf3-4e6a-abda-7cc248109505%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Thanks, Thomas.
How do I put out thread metrics from the Netty HTTP server? Specifically, the number of threads and the request queue length.
The metrics (1) vertx_http_server_connections (number of open connections to the HTTP server) and (2) vertx_http_server_requests (number of requests being processed) look interesting for my purpose.
Does the difference, i.e. (2) - (1), give the number of requests queued? That would be a very useful metric for determining whether my app server is overloaded, i.e. receiving more requests than the number of threads available to process them.
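The arithmetic proposed above can be sketched as a plain helper. Note this is only a rough signal, not a true queue length: with HTTP keep-alive an open connection may carry no in-flight request, so the raw difference can also go negative; it is clamped at zero here. The helper name is illustrative, not part of any Vert.x API.

```java
public class BacklogEstimate {
    // Hypothetical helper: treat (requests being processed) minus
    // (open connections) as a rough backlog signal, clamped at zero.
    static long estimateBacklog(long requestsInFlight, long openConnections) {
        return Math.max(0L, requestsInFlight - openConnections);
    }

    public static void main(String[] args) {
        // e.g. 120 requests in flight over 100 open connections
        System.out.println(estimateBacklog(120, 100)); // prints 20
        // keep-alive connections with no active request: clamped to 0
        System.out.println(estimateBacklog(80, 100));  // prints 0
    }
}
```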
On Monday, 29 April 2019 13:22:22 UTC+5:30, Thomas SEGISMONT wrote:
Netty does use a thread pool to serve requests, if that's what you are looking for. Here are the metrics Vert.x can give you about an HTTP server: https://vertx.io/docs/vertx-micrometer-metrics/java/#_http_server
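For reference, a minimal sketch of enabling those metrics, assuming vertx-micrometer-metrics and micrometer-registry-prometheus are on the classpath (option class names per the linked docs; this is configuration, not a complete application):

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.micrometer.MicrometerMetricsOptions;
import io.vertx.micrometer.VertxPrometheusOptions;

public class MetricsSetup {
    public static void main(String[] args) {
        // Enable the Micrometer metrics SPI with a Prometheus backend.
        // HTTP server metrics such as vertx_http_server_connections and
        // vertx_http_server_requests are then collected automatically.
        Vertx vertx = Vertx.vertx(new VertxOptions().setMetricsOptions(
            new MicrometerMetricsOptions()
                .setPrometheusOptions(new VertxPrometheusOptions().setEnabled(true))
                .setEnabled(true)));
    }
}
```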
On Sun, 28 Apr 2019 at 14:26, Shiva Ramagopal <tr....@gmail.com> wrote:
Thanks, Thomas.
How do I put out thread metrics from the Netty HTTP server? Specifically, the number of threads and the request queue length.
I have two verticles in my application, V1 and V2, with 2 instances of each, all running in the same JVM. V1 receives an HTTP request, logs it, and sends a JSON message over the event bus to V2. V2 actually processes the request. The reason for doing things this way is to reduce user-perceived latency: V2 can take maybe 50-100 ms to complete the processing, but V1 returns within 10 ms.
Occasionally I find that the response time of V1 exceeds 200 ms, and I'm trying to find the reason for this. My hunch is that a traffic burst causes queuing of requests, which is why I'm interested in the Netty metrics.
Any ideas why this behaviour occurs? If the incoming request rate exceeds the server's capacity, shouldn't the requests be queued at the Netty end?
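The setup described can be sketched roughly as below. The event bus address "work.queue", port 8080, and the JSON payload are illustrative assumptions, not details from the thread; this is a sketch of the hand-off pattern, not the actual application.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

// V1: accepts the HTTP request, logs it, hands it off, replies fast.
class V1 extends AbstractVerticle {
    @Override
    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            System.out.println("received " + req.path());
            // Fire-and-forget hand-off over the event bus.
            vertx.eventBus().send("work.queue", new JsonObject().put("path", req.path()));
            req.response().setStatusCode(202).end();
        }).listen(8080);
    }
}

// V2: does the actual (50-100 ms) processing off the HTTP path.
class V2 extends AbstractVerticle {
    @Override
    public void start() {
        vertx.eventBus().consumer("work.queue", msg -> {
            // actual processing happens here
        });
    }
}

public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Two instances of each verticle, all in the same JVM.
        vertx.deployVerticle(V1.class.getName(), new DeploymentOptions().setInstances(2));
        vertx.deployVerticle(V2.class.getName(), new DeploymentOptions().setInstances(2));
    }
}
```

One caveat relevant to the latency question: both V1 instances share the event loop pool, so if V2's processing (or anything else) runs on an event loop thread, V1's response time suffers even though the hand-off itself is fast.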