High latency with httpClient.post()


Svetlozar Argirov

Nov 28, 2015, 1:38:41 AM
to vert.x
Hi,

I am using Vert.x as a client for the Amazon MWS API, and I noticed something very strange. When I use the library provided by Amazon (based on Apache HttpClient) I get response times in the 90-120ms range for a single POST request, but when I do the same request with the async Vert.x client I never get a response below 270ms, and it is usually around 300ms. I managed to make a reproducible test case in one file:


import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpClientOptions;
import io.vertx.core.http.HttpClientRequest;
import io.vertx.core.http.HttpClientResponse;

import org.apache.http.Header;
import org.apache.http.HttpResponse;
import org.apache.http.HttpStatus;
import org.apache.http.NameValuePair;
import org.apache.http.StatusLine;
import org.apache.http.params.BasicHttpParams;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.protocol.BasicHttpContext;
import org.apache.http.params.HttpConnectionParams;
import org.apache.http.conn.ManagedClientConnection;
import org.apache.http.entity.StringEntity;
//import org.apache.http.client.HttpClient;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.protocol.HttpContext;
import org.apache.http.impl.conn.PoolingClientConnectionManager;
import org.w3c.dom.Element;
import java.util.concurrent.TimeUnit;

public class VertxClient extends AbstractVerticle {

    PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();

    String uri = "https://mws.amazonservices.co.uk/Products/2011-10-01";

    String postBody = "AWSAccessKeyId=INVALID&Action=GetServiceStatus&Signature=YTQtRA%2FfcMbjnbR%2FnjrFOSJdR5pliS2Em0YFqfEbNZE%3D&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2015-11-28T05%3A40%3A28.733Z&Version=2011-10-01";

    @Override
    public void start(Future<Void> fut) {
        connectionManager.setMaxTotal(50);
        connectionManager.setDefaultMaxPerRoute(50);
        vertx.createHttpServer()
            .requestHandler(r -> {
                HttpClientOptions opts = new HttpClientOptions()
                        .setSsl(uri.startsWith("https"))
                        .setKeepAlive(true)
                        .setMaxPoolSize(50);
                HttpClient client = vertx.createHttpClient(opts);
                long startTime = System.currentTimeMillis();
                if (r.params().contains("async")) {
                    client.postAbs(uri, response -> {
                        long stopTime = System.currentTimeMillis();
                        long elapsedTime = stopTime - startTime;
                        r.response().putHeader("x-reponse-time", String.valueOf(elapsedTime));
                        r.response().end(String.valueOf(response.statusCode()) + response.statusMessage());
                    }).end(postBody);
                } else {
                    BasicHttpParams httpParams = new BasicHttpParams();
                    //HttpConnectionParams.setStaleCheckingEnabled(httpParams, true);
                    //HttpConnectionParams.setTcpNoDelay(httpParams, true);
                    org.apache.http.client.HttpClient httpClient = new DefaultHttpClient(connectionManager, httpParams);
                    HttpContext httpContext = new BasicHttpContext();
                    HttpPost request = new HttpPost(uri);
                    try {
                        request.setEntity(new StringEntity(postBody));
                        HttpResponse response = httpClient.execute(request, httpContext);
                        long stopTime = System.currentTimeMillis();
                        long elapsedTime = stopTime - startTime;
                        r.response().putHeader("x-reponse-time", String.valueOf(elapsedTime));
                        r.response().end(response.getStatusLine().toString());
                    } catch (Exception e) {
                        long stopTime = System.currentTimeMillis();
                        long elapsedTime = stopTime - startTime;
                        r.response().putHeader("x-reponse-time", String.valueOf(elapsedTime));
                        r.response().end(e.getMessage());
                    } finally {
                        request.releaseConnection();
                    }
                }
            })
            .listen(8080, result -> {
                if (result.succeeded()) {
                    fut.complete();
                } else {
                    fut.fail(result.cause());
                }
            });
    }
}

It is not exactly the same as my real setup, but the problem is still visible. If the "async" parameter is present the request goes through the Vert.x httpClient; otherwise it goes through the Apache HttpClient.
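One thing worth checking here (my own assumption, not something confirmed in the thread): the test above creates a new Vert.x HttpClient inside the request handler, so every request gets a fresh connection pool and keep-alive never gets a chance to reuse a connection. A minimal sketch of creating the client once in start() instead (names uri, postBody, and the options are taken from the test case above; this is a fragment of the start() body, not a complete file):

```java
// Sketch (assumption): build the HttpClient once so its connection pool
// outlives individual requests and keep-alive can actually reuse
// the SSL connection across requests.
HttpClientOptions opts = new HttpClientOptions()
        .setSsl(uri.startsWith("https"))
        .setKeepAlive(true)
        .setMaxPoolSize(50);
HttpClient client = vertx.createHttpClient(opts);   // shared, not per-request

vertx.createHttpServer()
    .requestHandler(r -> {
        long startTime = System.currentTimeMillis();
        client.postAbs(uri, response -> {
            long elapsedTime = System.currentTimeMillis() - startTime;
            r.response().putHeader("x-reponse-time", String.valueOf(elapsedTime));
            r.response().end(response.statusCode() + response.statusMessage());
        }).end(postBody);
    })
    .listen(8080);
```

If the first request is still slow but subsequent ones drop to Apache-like numbers, that would point at the handshake rather than the client itself.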

With async the response time is always around 270ms; without async it is about 200ms (in my real setup the sync call finishes in about 100ms). I used nperf, a tool similar to Apache ab, with 100 requests, and the average response time is the same.

Using the synchronous Apache HttpClient: nperf -c1 -n 100 http://localhost:8080
{ statuses: { '200': 100 },
  min: 190,
  max: 332,
  avg: 199.40999999999997,
  count: 100,
  rate: 5.0057566201131305,
  start: 1448690948179,
  total_time: 19977 }

Using the async Vert.x httpClient: nperf -c1 -n 100 http://localhost:8080?async
{ statuses: { '200': 100 },
  min: 259,
  max: 1404,
  avg: 297.6699999999999,
  count: 100,
  rate: 3.356493135971537,
  start: 1448690985302,
  total_time: 29793 }

I also tried some other endpoints instead of Amazon, and while there is still a difference it is smaller; the sync version is always a bit faster, though.
I suspect it is something related to SSL, because I get different results over plain HTTP, but I have no idea how to find out what is happening, as I am using Vert.x for the first time.
Any help/clues are very much appreciated :)

Cheers,
Zaro

Chris Jones

Nov 29, 2015, 6:22:38 AM
to vert.x
Hi Zaro,

I haven't tried your code, but I suspect the difference may be related to the way the libraries handle SSL keep-alive.

At one point, the Vert.x HttpClient would shut down SSL connections if there were no requests in a 'waiting' state, so Keep-Alive would only really function if your queue was constantly busy. If your queues weren't completely saturated, then each new request would trigger the complete SSL handshake and take the full handshake latency hit (which is around 3x, lining up with your ~100ms for Apache and ~270ms for the Vert.x client). It would also explain why you are getting different figures with HTTP (I'm guessing the two would be a lot closer over plain HTTP).

This looks like it may have been addressed, though (https://github.com/eclipse/vert.x/pull/1126), so check whether the version of Vert.x you are using has that change. It appears to be in version 3.1 but wasn't in 3.0.

I originally ran into this when I was using 3.0 and ended up implementing a worker verticle based on the Apache HTTP client, because the handshake latency was a significant issue in the application I was dealing with. I have not retested with the changes in the pull request above, though.
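A worker-style fallback like that can be sketched with vertx.executeBlocking, which runs the blocking Apache call off the event loop (a minimal sketch, not the actual implementation from that project; httpClient, uri, postBody, and the request object r are assumed to match the test case from the first post):

```java
// Sketch: run the blocking Apache HttpClient call on a worker thread via
// executeBlocking, then finish the Vert.x response on the event loop.
vertx.executeBlocking(future -> {
    HttpPost request = new HttpPost(uri);
    try {
        request.setEntity(new StringEntity(postBody));
        HttpResponse response = httpClient.execute(request);
        future.complete(response.getStatusLine().toString());
    } catch (Exception e) {
        future.fail(e);
    } finally {
        request.releaseConnection();
    }
}, res -> {
    if (res.succeeded()) {
        r.response().end((String) res.result());
    } else {
        r.response().end(res.cause().getMessage());
    }
});
```

This keeps the Apache client's connection pooling (and its keep-alive behavior) while staying inside the Vert.x threading model.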

...I may be completely wrong, but it's another angle to look at...

Svetlozar Argirov

Nov 30, 2015, 9:59:16 AM
to vert.x
Hi Chris,

I am using 3.1, and I did suspect that the connections were not being reused, but I have no idea how to confirm that. I guess I will try to find some way to do it :)
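Two ways to check this from outside the application (a sketch, assuming a Linux box; your-app.jar is a placeholder name, not a file from the thread):

```shell
# Watch established TCP connections to port 443 while the benchmark runs.
# A new ephemeral local port for every request means connections are not
# being reused; a stable set of ports means keep-alive is working.
netstat -tn | grep ':443' | grep ESTABLISHED

# Alternatively, have the JVM log every TLS handshake it performs;
# one handshake per request means keep-alive is effectively off.
java -Djavax.net.debug=ssl:handshake -jar your-app.jar
```

The javax.net.debug output is verbose, but counting "ClientHello" lines against the number of requests gives a direct answer.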

Thanks,
Zaro