--
You received this message because you are subscribed to the Google Groups "vert.x" group.
To unsubscribe from this group and stop receiving emails from it, send an email to vertx+un...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
I know this is a very old thread, but I have a similar issue and it seems nothing has changed since the start of this topic.
I want to be able to reuse HttpClient connections to avoid the overhead of creating a new connection with each request, as there will be a lot of requests coming in, all targeted to the same host.
This means that I have to set keepalive to true, but as far as I can tell this also seems to imply that pipelining is enabled and that's a problem in my case because the host that I am talking to supports keepalive but not pipelining of requests.
So whenever I enable keepalive, I receive a lot of "400 Bad Request" responses from the host because a previous response was not yet fetched by the client.
Therefore the suggestion from N8 about pooling without pipelining seems like a nice way to work around servers that don't support pipelining of requests.
Santo
On 02/05/14 21:55, sANTo L wrote:
I know this is a very old thread, but I have a similar issue and it seems nothing has changed since the start of this topic.
I want to be able to reuse HttpClient connections to avoid the overhead of creating a new connection with each request, as there will be a lot of requests coming in, all targeted to the same host.
This means that I have to set keepalive to true, but as far as I can tell this also seems to imply that pipelining is enabled and that's a problem in my case because the host that I am talking to supports keepalive but not pipelining of requests.
So whenever I enable keepalive, I receive a lot of "400 Bad Request" responses from the host because a previous response was not yet fetched by the client.
Why not just delay sending the next request until the previous response has been received?
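For what it's worth, the workaround suggested here (delay the next request until the previous response has arrived) can be sketched in plain Java. The names below are illustrative, not a real vert.x API, and locking is omitted on the assumption that everything runs on a single event loop:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Illustrative sketch, not a real vert.x API: serialize requests so a new
// one is only written after the previous response has been fully received.
public class SerializedRequester {
    private final Queue<Consumer<Runnable>> pending = new ArrayDeque<>();
    private boolean inFlight = false;

    // 'send' stands in for whatever writes the request on the wire; it must
    // invoke the callback it is handed once the full response has arrived.
    public void request(Consumer<Runnable> send) {
        pending.add(send);
        drain();
    }

    private void drain() {
        if (!inFlight && !pending.isEmpty()) {
            inFlight = true;
            pending.poll().accept(this::onResponseComplete);
        }
    }

    private void onResponseComplete() {
        inFlight = false;
        drain();
    }
}
```

Even this toy version shows the cost of the workaround: every caller has to route its requests through the queue instead of just calling the client.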
--
Hi Norman,
I think Jorge described it very well indeed.
The client should not pipeline all requests, for several reasons (see also RFC 2616), yet that is what it is doing at the moment.
And yes, it is possible to work around this, but it's just a workaround - with several drawbacks for the end user - and therefore not a real solution.
Similar to saying that an HTTP/1.1 server is defunct when it doesn't support pipelining (see earlier response), I think a client is also kind of defunct when it pipelines all its requests without providing the option to disable it.
regards,
Santo
On Saturday, May 10, 2014 4:57:08 PM UTC+2, Norman Maurer wrote:
I’m still not sure I understand the problem / concern… Why not just do what Tim suggested and only send the next request once you received the full response?
--
Norman Maurer
--
The only difference between enabling pipelining or not is whether the HTTP connection is returned to the pool after the request is ended (pipelining) or after the response is received (no pipelining). Enabling pipelining should be the rare case.
According to the HTTP/1.1 specification (see bug): "Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods", but vert.x is pipelining requests of any HTTP method. It also creates a performance penalty when there is a substantial difference in response time between requests.
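The SHOULD NOT above refers to the idempotent methods that RFC 2616 defines (GET, HEAD, PUT, DELETE, OPTIONS, TRACE). A client that wanted to honor it could gate pipelining on the method, roughly like this (a standalone sketch, not vert.x code):

```java
import java.util.Set;

public class PipeliningPolicy {
    // Idempotent methods per RFC 2616; per the spec, only these are
    // safe candidates for pipelining.
    private static final Set<String> IDEMPOTENT =
        Set.of("GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE");

    public static boolean mayPipeline(String method) {
        return IDEMPOTENT.contains(method);
    }
}
```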
I think Tim's proposal would create more complex code to build the HTTP client. I guess that I would have to create a queue of requests and send the next request only after receiving a response. Perhaps I would also have to deal with timeouts (to discard requests from the queue). And I'm not sure that it would work when instances > 1. I mean, if we have 10 vert.x instances but only 1 maxSocket, I can control the reception of responses for my instance, but the socket is shared by all the instances and we could not avoid sending more than 1 request simultaneously on the same socket.
If you could have a look at the patch, it is very simple and it maintains backwards compatibility to avoid any conflict with previous versions.
On Saturday, May 10, 2014 4:57:08 PM UTC+2, Norman Maurer wrote:
I’m still not sure I understand the problem / concern… Why not just do what Tim suggested and only send the next request once you received the full response?
--
Norman Maurer
--
I'm also concerned with mixing pooling and pipelining. We are about to launch to production and this is a big issue. Our platform is basically a proxy to different backend services, but we've found that some resources may take 50 ms, 10 seconds, or even 10 minutes. Pipelining is really nice when delays are small, but when responses are very slow, the pipeline will kill the average response time.
Imagine the scenario where 10 requests of 50 ms are mixed with a 10-minute request in the same pipeline. In the worst case, all the responses will be received after 10 minutes.
We could disable keepalive, but without a connection pool that is inefficient (establishing a connection is not cheap) and it is very easy to exhaust ports (in TIME_WAIT status) when you try to have high concurrency (imagine 1000 threads or simultaneous connections continuously opening new connections to our backend server).
So, is there any chance to separate pipelining from connection pooling? +1 for this divorce
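The numbers in that scenario are easy to check: HTTP/1.1 responses must come back in request order on a pipelined connection, so every fast response queued behind the slow one inherits its latency. A back-of-the-envelope illustration:

```java
public class HeadOfLineBlocking {
    // Time until the last fast response arrives when everything is pipelined
    // on one connection: the fast responses must wait behind the slow one.
    static long pipelinedWorstCaseMs(long slowMs, long fastMs, int fastCount) {
        return slowMs + fastCount * fastMs;
    }

    public static void main(String[] args) {
        long slow = 10 * 60 * 1000; // the 10-minute request, in ms
        // Ten 50 ms requests pipelined behind it: ~600500 ms worst case.
        System.out.println(pipelinedWorstCaseMs(slow, 50, 10));
        // With a pool and no pipelining, each fast request can grab a free
        // connection and still finishes in ~50 ms, independent of the slow one.
    }
}
```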
On Sunday, May 4, 2014 9:42:45 PM UTC+2, sANTo L wrote:
On Saturday, May 3, 2014 9:54:03 AM UTC+2, Tim Fox wrote:
On 02/05/14 21:55, sANTo L wrote:
I know this is a very old thread, but I have a similar issue and it seems nothing has changed since the start of this topic.
I want to be able to reuse HttpClient connections to avoid the overhead of creating a new connection with each request, as there will be a lot of requests coming in, all targeted to the same host.
This means that I have to set keepalive to true, but as far as I can tell this also seems to imply that pipelining is enabled and that's a problem in my case because the host that I am talking to supports keepalive but not pipelining of requests.
So whenever I enable keepalive, I receive a lot of "400 Bad Request" responses from the host because a previous response was not yet fetched by the client.
Why not just delay sending the next request until the previous response has been received?
Well, that's the whole point of the discussion: let vert.x keep a pool of connections and decide which request can use what connection.
To quote N8:
"One final thought, it seems useful to use pooling without pipelining. In other words, each request gets its own connection, but the connections come from a pool so if there is a free one available, that can be used. Any reason pipelining and keep-alive are married? What about separating them out?"
And you answered with:
"+1 That's doable"
So I guess it is doable ;-)
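As it happens, this is exactly the split that later Vert.x releases exposed: in the 3.x HttpClientOptions API (not the vert.x 2.x API this thread is about), keep-alive and pipelining are independent flags. A configuration sketch, assuming the 3.x API:

```java
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpClientOptions;

public class PooledNotPipelined {
    public static void main(String[] args) {
        // Keep-alive pooling without pipelining: connections are reused,
        // but each one carries at most one in-flight request at a time.
        HttpClientOptions opts = new HttpClientOptions()
            .setKeepAlive(true)    // reuse connections from the pool
            .setPipelining(false)  // never queue a 2nd request on a connection
            .setMaxPoolSize(10);   // up to 10 connections per host
        HttpClient client = Vertx.vertx().createHttpClient(opts);
    }
}
```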
Santo
--
The problem is that I need a connection pool for performance reasons and to avoid port exhaustion caused by sockets stuck in TIME_WAIT.