Monitoring with Datadog


Shiva Ramagopal

Apr 25, 2019, 4:44:02 AM
to vert.x
Hi,

I'm looking for any pointers to help me get started with monitoring my Vert.x application with Datadog.

Specifically, I'm looking to monitor:
- the Netty HTTP request processor at the frontend, to see how many worker threads are active/idle/blocked, etc.
- the RedisClient latencies across different requests
- the ES client (HttpClient) latencies

Would appreciate any help from people who have done this before.

Thanks,
Shiva

Thomas SEGISMONT

Apr 25, 2019, 11:48:55 AM
to ve...@googlegroups.com
I would recommend using Vert.x Micrometer Metrics and configuring the Datadog backend: https://vertx.io/docs/vertx-micrometer-metrics/java/
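For illustration, a minimal setup sketch along these lines might look as follows. It assumes the micrometer-registry-datadog dependency is on the classpath and that a DD_API_KEY environment variable holds a Datadog API key; the class name and env var are illustrative assumptions, not from this thread:

```java
import io.micrometer.core.instrument.Clock;
import io.micrometer.datadog.DatadogConfig;
import io.micrometer.datadog.DatadogMeterRegistry;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.micrometer.MicrometerMetricsOptions;

public class DatadogSetup {
  public static void main(String[] args) {
    // Micrometer registry that pushes metrics to Datadog.
    DatadogConfig config = new DatadogConfig() {
      @Override
      public String apiKey() {
        return System.getenv("DD_API_KEY"); // assumed env var
      }

      @Override
      public String get(String key) {
        return null; // fall back to Micrometer's defaults for everything else
      }
    };
    DatadogMeterRegistry registry = new DatadogMeterRegistry(config, Clock.SYSTEM);

    // Hand the registry to Vert.x so HTTP server, HTTP client and
    // event bus metrics are reported into it automatically.
    Vertx vertx = Vertx.vertx(new VertxOptions().setMetricsOptions(
        new MicrometerMetricsOptions()
            .setMicrometerRegistry(registry)
            .setEnabled(true)));
  }
}
```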


--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vertx+un...@googlegroups.com.
Visit this group at https://groups.google.com/group/vertx.
To view this discussion on the web, visit https://groups.google.com/d/msgid/vertx/6a388a10-fbf3-4e6a-abda-7cc248109505%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Shiva Ramagopal

Apr 28, 2019, 8:26:30 AM
to vert.x
Thanks, Thomas.

How do I get thread metrics out of the Netty HTTP server? Specifically, the number of threads and the request queue length.

Thomas SEGISMONT

Apr 29, 2019, 3:52:22 AM
to ve...@googlegroups.com
Netty does not use a thread pool to serve requests, if that's what you are looking for. Here are the metrics Vert.x can give you about an HTTP server: https://vertx.io/docs/vertx-micrometer-metrics/java/#_http_server



Shiva Ramagopal

May 3, 2019, 5:41:53 AM
to vert.x
The metrics

(1) vertx_http_server_connections (number of open connections to the HTTP server) and
(2) vertx_http_server_requests (number of requests being processed) look interesting for my purpose.

Does the difference, i.e. ( (2) - (1) ), give the number of requests queued? That would be a hugely useful metric for me to use to determine if my app server is overloaded, i.e. receiving more requests than the number of threads available to process them.



Thomas SEGISMONT

May 3, 2019, 10:50:41 AM
to ve...@googlegroups.com
On Fri, 3 May 2019 at 11:41, Shiva Ramagopal <tr....@gmail.com> wrote:
> The metrics
>
> (1) vertx_http_server_connections (number of open connections to the HTTP server) and
> (2) vertx_http_server_requests (number of requests being processed) look interesting for my purpose.
>
> Does the difference, i.e. ( (2) - (1) ), give the number of requests queued?

No. You could have server requests equal to zero, for example, if all keep-alive connections are unused. Or server requests much bigger than connections if you use HTTP/2.

> That would be a hugely useful metric for me to use to determine if my app server is overloaded, i.e. receiving more requests than the number of threads available to process them.

Vert.x does not use a thread pool to serve HTTP requests, so you won't get this kind of metric. I know some users who simply monitor CPU usage and keep it under a 60% mean.



Shiva Ramagopal

May 4, 2019, 7:39:49 AM
to vert.x
Got it. I think I should also explain the performance problem that I'm facing.

I have two verticles, V1 and V2, in my application, with 2 instances of each, all running in the same JVM. V1 receives an HTTP request, logs it and sends a JSON message over the event bus to V2. V2 actually processes the request. The reason for doing things this way is to reduce user-perceived latency: V2 can take maybe 50-100 ms to actually complete the processing, but V1 returns within 10 ms.

Occasionally I find that the response time of V1 exceeds 200 ms, and I'm trying to find the reason for this. My hunch is that a traffic burst causes queuing of requests, and that's why I'm interested in the Netty metrics.

Any ideas why this behaviour occurs? If the incoming request rate exceeds the server capacity, shouldn't requests be queued at the Netty end?
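The two-verticle hand-off described above can be sketched roughly like this (the event bus address, port, status code and class names are made up for illustration):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

// V1: accepts the HTTP request, acknowledges quickly, and hands the
// work to V2 over the event bus.
class V1 extends AbstractVerticle {
  @Override
  public void start() {
    vertx.createHttpServer()
        .requestHandler(req -> {
          JsonObject job = new JsonObject().put("path", req.path());
          vertx.eventBus().send("work.queue", job); // fire-and-forget
          req.response().setStatusCode(202).end("accepted");
        })
        .listen(8080); // illustrative port
  }
}

// V2: does the actual (slower) processing off the HTTP request path.
class V2 extends AbstractVerticle {
  @Override
  public void start() {
    vertx.eventBus().<JsonObject>consumer("work.queue", msg -> {
      // the 50-100 ms of real work would happen here
    });
  }
}

public class TwoVerticles {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(V1.class.getName(), new DeploymentOptions().setInstances(2));
    vertx.deployVerticle(V2.class.getName(), new DeploymentOptions().setInstances(2));
  }
}
```

One thing worth noting about this design: with default deployment options all instances run on event loop threads, so if a V2 handler occupies its event loop for tens of milliseconds, any V1 instance that happens to share that thread can be delayed too, which is one way a burst could show up as V1 latency.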

Julien Viet

May 5, 2019, 3:42:59 PM
to vert.x
Are V1 and V2 in the same VM?

Shiva Ramagopal

May 5, 2019, 9:28:17 PM
to vert.x
Yes, they are on the same VM and also in the same JVM.

Thomas SEGISMONT

May 6, 2019, 3:40:49 AM
to ve...@googlegroups.com
If you send messages over the event bus, then you should look at the event bus metrics, perhaps the handler-specific ones: https://vertx.io/docs/vertx-micrometer-metrics/java/#_event_bus

That will tell you if some messages are "pending".
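If the Micrometer registry is reachable from code, one way to peek at that gauge directly might look like the sketch below. The metric name assumes the default "vertx.eventbus" naming, and the class name is made up; treat both as assumptions rather than verified API facts:

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.vertx.micrometer.backends.BackendRegistries;

public class PendingCheck {
  public static void main(String[] args) {
    // Look up the default registry Vert.x reports into; this can be null
    // if metrics were not started with a default backend.
    MeterRegistry registry = BackendRegistries.getDefaultNow();
    if (registry != null) {
      // "pending" counts messages received but not yet processed by a handler.
      double pending = registry.get("vertx.eventbus.pending").gauge().value();
      System.out.println("pending event bus messages: " + pending);
    }
  }
}
```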
