Can I create more than one HTTP client per verticle? Is there a limit on that?


Michael Pog
Apr 6, 2018, 6:21:18 PM
to vert.x
I would like to maintain a different connection pool size per destination host.
It looks like the only way I can do that is to create a separate HTTP client for each destination host. Is it possible to create multiple HTTP clients per host with different configurations? Is there a limit or something I need to watch for when it comes to that?
What would happen if I send a request to the same host:port from two different WebClients on the same verticle? Would that still work?

Julien Viet
Apr 7, 2018, 4:06:05 AM
to ve...@googlegroups.com
Hi,


On 7 Apr 2018, at 00:21, Michael Pog <smic...@gmail.com> wrote:

I would like to maintain a different connection pool size per destination host.
It looks like the only way I can do that is to create a separate HTTP client for each destination host. Is it possible to create multiple HTTP clients per host with different configurations? Is there a limit or something I need to watch for when it comes to that?

No, there is no limit in Vert.x per se (besides the heap :-) ).

I think the hard limit you can reach is the number of open file descriptors; however, whether you use one HTTP client per host or a global HTTP client, that limit is the same.

Internally, the HttpClient is actually a Map<Domain, Pool>.
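Julien's "Map<Domain, Pool>" description can be pictured with a short, framework-free sketch. To be clear, this is not Vert.x's actual implementation: the PerHostPools and Pool names are hypothetical, and a Semaphore stands in for real connections, purely to illustrate one lazily created pool per host:port key, each with its own connection limit.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Simplified sketch of the "Map<Domain, Pool>" idea: one pool per
// destination, each with its own connection cap. NOT Vert.x's real code.
class PerHostPools {
    // Hypothetical pool: a permit count stands in for actual connections.
    static final class Pool {
        final Semaphore permits;
        Pool(int maxConnections) { this.permits = new Semaphore(maxConnections); }
        int available() { return permits.availablePermits(); }
    }

    private final Map<String, Pool> pools = new ConcurrentHashMap<>();
    private final int defaultMaxPerHost;

    PerHostPools(int defaultMaxPerHost) { this.defaultMaxPerHost = defaultMaxPerHost; }

    // Lazily create one pool per "host:port" key, mirroring the internal map.
    Pool poolFor(String host, int port) {
        return pools.computeIfAbsent(host + ":" + port, k -> new Pool(defaultMaxPerHost));
    }
}
```

Swapping defaultMaxPerHost for a per-host lookup is how you would get different limits per destination; in Vert.x terms that is roughly what creating one client per host with its own max pool size setting buys you.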

What would happen if I send a request to the same host:port from two different WebClients on the same verticle? Would that still work?

Yes, no conflicts are expected.


--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vertx+un...@googlegroups.com.
Visit this group at https://groups.google.com/group/vertx.
To view this discussion on the web, visit https://groups.google.com/d/msgid/vertx/1162ae5e-d7ee-421c-90d5-9ae3f5b2f7ed%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Michael Pog
Apr 8, 2018, 2:08:37 AM
to vert.x
Is there a way to share a WebClient or a connection pool among threads/verticles? I need to limit my connection pool, and with 8 threads the minimum pool size I can achieve is 8, which is more than I can have.

Julien Viet
Apr 8, 2018, 3:32:29 AM
to ve...@googlegroups.com
Hi,

it's not clear to me what you want to achieve.

The WebClient uses the Vert.x event loop it is on when it needs to open a connection, and it does not create threads or event loops.

So if you share a web client between verticles, your verticle might reuse a connection previously opened (because of pooling) and you will get callbacks on an event loop you don't expect. In addition, there is synchronization in the web client that might become contended when it is used intensively from different threads.

Julien

> On 8 Apr 2018, at 08:08, Michael Pog <smic...@gmail.com> wrote:
>
> Is there a way to share a WebClient or a connection pool among threads/verticles? I need to limit my connection pool, and with 8 threads the minimum pool size I can achieve is 8, which is more than I can have.
>

Michael Pog
Apr 8, 2018, 1:27:49 PM
to vert.x
Julien, 
Thank you for your reply.
I'll explain the problem I'm facing, and maybe you can suggest a better approach than what I'm asking for.

I have a web server with 8 verticles; each verticle has an HTTP server and a WebClient.
For each request I receive, I do some transformation and send an outbound request downstream with the WebClient.

The problem I'm facing is that the downstream services are limited in the number of connections they can accept.
So what I would like to achieve is, per hostname, a connection pool of size 1.
However, because I have 8 verticles and each has its own WebClient, my effective connection pool size per host is 8.

In addition, I'm doing DNS caching in my application (to avoid not being able to send any requests if the DNS servers are unreachable).
And any target hostname may actually resolve to 4 IP addresses.
So then, effectively, my connection pool per host turns into 32 instead of the 1 I wanted.

So what I'm really looking for is:
1. A way to keep the threading model the way it is (it's definitely very efficient), but to be able to share a connection pool among WebClients/verticles.
2. A way to pass the WebClient both a hostname and the IP address I want it to use, so that the number of connections would be per host and not per IP address, but it would still use the IP I provided and not do a DNS resolution itself / take the IP address from Netty's DNS cache.

What can I do to achieve those goals?

Thanks

Julien Viet
Apr 8, 2018, 2:13:57 PM
to ve...@googlegroups.com
Have you tried using the event bus and splitting your HttpServer and HttpClient into two verticles so you can scale them independently?

Michael Pog
Apr 8, 2018, 6:47:39 PM
to vert.x
Hi Julien
Yes, I have.
That does not scale.
The event bus and the communication between threads add extra overhead and latency, and it does not perform well.
Also, a single request to my web server can be transformed into 40-50 outbound requests, so having more HTTP server verticles than HTTP client ones makes no sense at all.

It would be great if WebClient could at least take an InetAddress instead of a hostname (just like AsyncHttpClient does), or something like that which would allow passing both an IP address and a hostname; WebClient would then maintain a pool per host and not per IP, but still use the provided IP address. I think that at least is doable and does not add any contention/overhead.
Having a shared connection pool among multiple WebClients may indeed be tricky, but AsyncHttpClient does have that and it performs relatively fine. At least having that option would be nice.

Julien Viet
Apr 9, 2018, 2:09:08 AM
to ve...@googlegroups.com

On 9 Apr 2018, at 00:47, Michael Pog <smic...@gmail.com> wrote:

Hi Julien
Yes I have.
That does not scale. 
The event bus and the communication between threads add extra overhead and latency, and it does not perform well.

it's expected to add a bit of latency.

Have you tried instead using event-loop-to-event-loop communication with a simple runOnContext?

It would be very similar, except that you bypass the event bus routing/copying; basically, sharing an HttpClient between event loops does the same thing.
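For readers who haven't used runOnContext: each Vert.x event loop is a single thread, and runOnContext just schedules a task on a specific loop. A rough plain-Java model of the hand-off Julien describes, with single-threaded executors standing in for event loops (illustrative only; this is not the Vert.x API, and the names are made up):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Two single-threaded executors stand in for two event loops.
// "runOnContext" is modeled as submitting a task to the target loop's thread.
class EventLoopHandoff {
    static String roundTrip() {
        ExecutorService serverLoop = Executors.newSingleThreadExecutor(r -> new Thread(r, "server-loop"));
        ExecutorService clientLoop = Executors.newSingleThreadExecutor(r -> new Thread(r, "client-loop"));
        CompletableFuture<String> result = new CompletableFuture<>();
        try {
            // A request arrives on the server loop...
            serverLoop.submit(() ->
                // ...is handed to the loop that owns the WebClient...
                clientLoop.submit(() -> {
                    String response = "handled on " + Thread.currentThread().getName();
                    // ...and the response is handed back to the server loop.
                    serverLoop.submit(() -> result.complete(response));
                }));
            return result.get(); // blocks here for the demo; real Vert.x code stays asynchronous
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            serverLoop.shutdown();
            clientLoop.shutdown();
        }
    }
}
```

The two submit hops are the "context switch" cost being debated in this thread: cheaper than event bus routing, but not free.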

Also, a single request to my web server can be transformed into 40-50 outbound requests, so having more HTTP server verticles than HTTP client ones makes no sense at all.

It would be great if WebClient could at least take an InetAddress instead of a hostname (just like AsyncHttpClient does), or something like that which would allow passing both an IP address and a hostname; WebClient would then maintain a pool per host and not per IP, but still use the provided IP address. I think that at least is doable and does not add any contention/overhead.

Currently there is a pool per host (and port); I don't get the difference, can you elaborate?

Having a shared connection pool among multiple WebClients may indeed be tricky, but AsyncHttpClient does have that and it performs relatively fine. At least having that option would be nice.

AsyncHttpClient's channel pool internal state is simpler than Vert.x's, because it does not implement a max connection count per host and does not take HTTP/2 into account.

javadevmtl
Apr 9, 2018, 11:38:27 AM
to vert.x
Hi Michael, you can implement a simple pool on top, just like I did, if you want to control it.

In most cases I use the standard 1 client per verticle instance, like most people would. But in this particular case I decided to create a pool on top of what's already there so I can control it a bit more.

The event bus way can also work. If the event bus is local, the perf is good. You say this approach doesn't scale, but your downstream API can only handle 1 connection, so the slight overhead shouldn't matter.

javadevmtl
Apr 9, 2018, 11:51:51 AM
to vert.x
Here is what I do per business request...

I'm using: compile 'net.jodah:expiringmap:0.5.8'
It has a nice TTL feature that removes the client from the map after a certain time. It also has an expiration listener, which allows closing the client before it is removed from the map.

Instantiate the TTL map, 1 per verticle instance:

Map<String, WebClient> clientPool = ExpiringMap.builder()
    .maxSize(webhookConnectionMaximumPoolSize)
    .expiration(webhookConnectionPoolTtlSec, TimeUnit.SECONDS)
    .<String, WebClient>asyncExpirationListener((k, c) -> c.close())
    .build();

Per business request:

String urlAsKey = webhookBody.getString("domain") + ":" + webhookBody.getInteger("port") + webhookBody.getString("uri");

WebClient client = clientPool.get(urlAsKey);

if (client == null) {
    WebClientOptions clientOptions = new WebClientOptions();
    clientOptions.setSsl(webhookBody.getBoolean("ssl"));
    clientOptions.setTrustAll(webhookBody.getBoolean("trustAll"));

    client = WebClient.create(vertx, clientOptions);
    clientPool.put(urlAsKey, client);
}

client
    .post(...)
    .as(BodyCodec.jsonObject())...


Michael Pog
Apr 9, 2018, 4:19:19 PM
to vert.x
Julien and javadevmtl,
Thanks for your replies and suggestions.

Julien, let me give you a few clarifications first.
"currently there is a pool per host (and port), I don't get the difference, can you elaborate ?"

Yes.
Actually, when I started responding to you I realized that what I was asking for is not possible.
But host:port is different from IP:port, as a single hostname can resolve to multiple IP addresses.

When it comes to my experimentation with the event bus, I implemented my own codec which made sure no copying was done and only references were passed, so I'm guessing it was doing pretty much the same thing as runOnContext. But I did not know about runOnContext; I will play with that.

javadevmtl
Thank you for the code snippet and the explanation. 
In this case I can definitely have a separate WebClient per host,
but that means going down the route of using runOnContext: getting a request in my HTTP server on thread 1, passing it to thread 2 using runOnContext (since thread 2 has the WebClient for the given hostname), and then, when the response comes back on thread 2, using runOnContext again to pass it back to thread 1; otherwise I have to deal with synchronization issues.
I haven't tried it, but I think it will kill my performance; it's a latency-sensitive app.

Thanks for the suggestion though! I appreciate it.

Michael Pog
Apr 9, 2018, 6:17:42 PM
to vert.x
In fact, Julien,
how does Vert.x handle the situation where one hostname resolves to 4 IP addresses?
If I understand the code correctly, the connection pool is keyed off the peer hostname. But if that hostname has 4 IP addresses and the connection pool size is 1, does that mean only 1/4 of the requests would actually go out and the rest would be queued forever?

Julien Viet
Apr 10, 2018, 2:02:44 AM
to ve...@googlegroups.com
It takes the first available address. It could be improved to perform round-robin, but in that case the maxSize per host would remain the same.

Yes, the issue is that you don't know how many servers are available until you make a lookup (so it's a kind of dynamic max size).
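The round-robin improvement Julien mentions could be sketched as follows. This is a hypothetical helper, not something Vert.x provides: it cycles through whatever addresses the hostname resolved to, while the pool limit stays keyed to the hostname as discussed above.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin selector over the addresses a hostname resolves to.
// Vert.x currently takes the first available address; this sketches the
// alternative discussed in the thread.
class RoundRobinAddresses {
    private final List<String> addresses; // e.g. the 4 IPs one hostname resolves to
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinAddresses(List<String> addresses) { this.addresses = addresses; }

    String nextAddress() {
        // floorMod keeps the index valid even after the counter overflows.
        return addresses.get(Math.floorMod(next.getAndIncrement(), addresses.size()));
    }
}
```

Note this only spreads connections across the resolved IPs; it does not change the per-host maxSize, which is Julien's point about the limit being a kind of dynamic max size.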

Julien Viet
Apr 10, 2018, 2:03:51 AM
to ve...@googlegroups.com
Keep in mind that if you are sharing an HttpClient among verticles, this context switch will happen under the hood (and will likely be less efficient than if you do it yourself).


