Performance issues


Eugene Prokopiev

Jan 12, 2015, 2:30:58 PM
to asyncht...@googlegroups.com
Hi,

I tried to compare the performance of HTTP clients when running concurrent requests.

My server code - https://gist.github.com/enp/8fca5a4c1db76b45b51c
My client code - https://gist.github.com/enp/0db72d34b4e671113ab3

My output:

OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK
JDKAsyncHttpProvider : 680
OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK
GrizzlyAsyncHttpProvider : 4601
OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK
NettyAsyncHttpProvider : 5366
OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK OK
PlainProvider : 5283


Why is JDKAsyncHttpProvider so much faster than the others? Maybe my tests are not correct?

Stéphane Landelle

Jan 12, 2015, 4:17:34 PM
to asyncht...@googlegroups.com
Yes, your test is wrong:
  • contrary to the AHC providers, your PlainProvider (blocking IO) doesn't download the page (you would have to read the InputStream); it just opens the connection
  • your PlainProvider never closes the connection
  • your other providers are never closed
  • you don't create real load (just a few requests), so you're mostly testing start-up time
I fixed it and tested with a 100 KB page and 200 ms of latency, and got the following results:
  • JDKAsyncHttpProvider : 445
  • NettyAsyncHttpProvider : 423
  • PlainProvider : 616
Also, the point of NIO is not to read the response in a blocking way: you have to listen to the events, compose the Futures, etc.
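
For illustration, a minimal sketch of that non-blocking style with the AHC 1.9 API (the URL, request count and latch-based wait are assumptions for the example, not taken from the gists):

import com.ning.http.client.AsyncCompletionHandler;
import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.Response;

import java.util.concurrent.CountDownLatch;

public class NonBlockingSketch {
    public static void main(String[] args) throws Exception {
        final int requests = 25;
        final CountDownLatch done = new CountDownLatch(requests);
        final AsyncHttpClient client = new AsyncHttpClient(); // defaults to the Netty provider

        for (int i = 0; i < requests; i++) {
            client.prepareGet("http://localhost:3000/").execute(new AsyncCompletionHandler<Response>() {
                @Override
                public Response onCompleted(Response response) throws Exception {
                    // the body has been fully read by the provider; no blocking get() on the future
                    System.out.println("OK " + response.getResponseBody().length());
                    done.countDown();
                    return response;
                }

                @Override
                public void onThrowable(Throwable t) {
                    t.printStackTrace();
                    done.countDown();
                }
            });
        }

        done.await();   // wait for all responses before shutting down
        client.close(); // always close the client and its provider
    }
}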



Eugene Prokopiev

Jan 13, 2015, 12:08:17 AM
to asyncht...@googlegroups.com
2015-01-13 0:17 GMT+03:00 Stéphane Landelle:

> I fixed and tested with a 100kb page with a 200ms latency and got the
> following results:
> ...

Thanks! Where can I see a fixed version?

--
WBR,
Eugene Prokopiev

Stéphane Landelle

Jan 13, 2015, 12:27:22 AM
to asyncht...@googlegroups.com
In your client gist

Eugene Prokopiev

Jan 13, 2015, 2:16:08 AM
to asyncht...@googlegroups.com
2015-01-13 8:27 GMT+03:00 Stéphane Landelle:

> In your client gist

In revisions - https://gist.github.com/enp/0db72d34b4e671113ab3/revisions
- I see only my last changes. Can you give me a link to revision with
your changes?

Eugene Prokopiev

Jan 13, 2015, 2:18:32 AM
to asyncht...@googlegroups.com
Sorry, I see the changes in a comment on the gist :)

What does your local test server with latency=200 look like?

--
WBR,
Eugene Prokopiev

Stéphane Landelle

Jan 13, 2015, 3:03:45 AM
to asyncht...@googlegroups.com

Eugene Prokopiev

Jan 13, 2015, 3:51:04 AM
to asyncht...@googlegroups.com
What about Grizzly? I see many messages like this (with
setExecutorService(pool) or without):

11:46:59 AM org.glassfish.grizzly.nio.SelectorRunner notifyConnectionException
SEVERE: doSelect exception
java.util.concurrent.RejectedExecutionException: Task org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable@8d07459 rejected from java.util.concurrent.ThreadPoolExecutor@3e043f74[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.executeIoEvent(WorkerThreadIOStrategy.java:100)
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.executeIoEvent(AbstractIOStrategy.java:89)
    at org.glassfish.grizzly.nio.SelectorRunner.iterateKeyEvents(SelectorRunner.java:414)
    at org.glassfish.grizzly.nio.SelectorRunner.iterateKeys(SelectorRunner.java:383)
    at org.glassfish.grizzly.nio.SelectorRunner.doSelect(SelectorRunner.java:347)
    at org.glassfish.grizzly.nio.SelectorRunner.run(SelectorRunner.java:278)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
    at java.lang.Thread.run(Thread.java:745)

--
WBR,
Eugene Prokopiev

Stéphane Landelle

Jan 13, 2015, 3:58:51 AM
to asyncht...@googlegroups.com
I also ran into some issue and just removed it.

If you're lucky, one of the Grizzly provider maintainers will jump in...

I might offend some people here, but my opinion is that Grizzly is more or less dead and that you shouldn't consider it as an option when starting new developments. Grizzly support will be dropped in AHC 2.0.

Alexey S

Jan 13, 2015, 4:33:32 AM
to asyncht...@googlegroups.com
Which thread-pool implementation exactly are you using?
The Grizzly provider uses the passed thread-pool to delegate I/O event processing. If you pass null as the thread-pool, it'll process responses in the I/O thread. So you can try setting null and see if it helps to resolve the problem.
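
A minimal sketch of that suggestion, using the AHC 1.9 config builder (whether the provider really ends up with a null pool depends on the config defaults, so treat this as an assumption to try rather than a guaranteed fix):

import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.AsyncHttpClientConfig;
import com.ning.http.client.providers.grizzly.GrizzlyAsyncHttpProvider;

public class GrizzlyNullPoolSketch {
    public static AsyncHttpClient build() {
        // No application thread pool: the Grizzly provider should then process
        // responses directly in its I/O threads instead of handing them off.
        AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
                .setExecutorService(null)
                .build();
        return new AsyncHttpClient(new GrizzlyAsyncHttpProvider(config), config);
    }
}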

WBR,
Alexey.

Eugene Prokopiev

Jan 13, 2015, 4:48:05 AM
to asyncht...@googlegroups.com
2015-01-13 12:33 GMT+03:00 Alexey S :

> Which exactly thread-pool implementation are you using?

Executors.newFixedThreadPool(25);

> Grizzly provider uses the passed thread-pool to delegate I/O events
> processing. If you pass null as a thread-pool - it'll process responses in
> the I/O thread. So you can try to set null and see if it helps to resolve
> the problem.

No, setExecutorService(null) doesn't help

--
WBR,
Eugene Prokopiev

Eugene Prokopiev

Jan 13, 2015, 5:22:20 AM
to asyncht...@googlegroups.com
I tried to rewrite my code again -
https://gist.github.com/enp/0db72d34b4e671113ab3 - I applied your
fixes and added body size and current seconds to the output. The output is
very strange: the time for JDKAsyncHttpProvider and PlainProvider is the same,
but NettyAsyncHttpProvider is much slower and executes requests in groups,
where the number of requests in one group equals the HTTP server (Starman)
worker count. How can NettyAsyncHttpProvider know about the server
worker count, and why does it fail at the end?

Output:

57766:1421143653
57764:1421143653
57766:1421143653
57766:1421143653
57764:1421143653
57766:1421143653
57768:1421143653
57766:1421143653
57766:1421143653
57766:1421143653
57764:1421143653
57766:1421143653
57766:1421143653
57764:1421143653
57766:1421143653
57768:1421143653
57764:1421143653
57766:1421143653
57764:1421143653
57766:1421143653
PlainProvider : 1111
57764:1421143654
57764:1421143654
57764:1421143654
57764:1421143654
57764:1421143654
57764:1421143654
57764:1421143654
57764:1421143654
57764:1421143655
57764:1421143655
57766:1421143655
57766:1421143655
57764:1421143655
57764:1421143655
57764:1421143655
57764:1421143655
57764:1421143655
57764:1421143655
57764:1421143655
57764:1421143655
JDKAsyncHttpProvider : 1178
57764:1421143655
57764:1421143655
57764:1421143655
57764:1421143656
57764:1421143656
57764:1421143656
57764:1421143657
57764:1421143657
57764:1421143657
57764:1421143659
57764:1421143659
57764:1421143659
57764:1421143659
57764:1421143659
57764:1421143659
57764:1421143660
57764:1421143660
57764:1421143660
57768:1421143661
java.net.ConnectException: connection timed out: /10.7.1.13:3000
NettyAsyncHttpProvider : 6413

--
WBR,
Eugene Prokopiev

Alexey S

Jan 13, 2015, 3:14:55 PM
to asyncht...@googlegroups.com
The problem with the Grizzly provider you see is probably related to the fact that the thread-pool you pass it is already closed by the Netty provider.
Additionally, there's a difference in how the Netty and Grizzly providers treat the application thread pool. Grizzly switches from the I/O thread to an application thread, whereas Netty, AFAIU, doesn't do that switch; Stephane can correct me if I'm wrong. So to make Grizzly work the same as Netty, without the thread context switch, you need to apply one more setting, like:

        // Run Grizzly's response processing on the selector (I/O) thread,
        // like the Netty provider does, instead of handing off to worker threads:
        final GrizzlyAsyncHttpProviderConfig grizzlyConfig = new GrizzlyAsyncHttpProviderConfig();
        grizzlyConfig.addProperty(TRANSPORT_CUSTOMIZER, new TransportCustomizer() {
            @Override
            public void customize(TCPNIOTransport transport, FilterChainBuilder builder) {
                transport.setIOStrategy(SameThreadIOStrategy.getInstance());
            }
        });

        Provider[] providers = new Provider[]{
            // ... other providers ...
            new AsyncProvider(new GrizzlyAsyncHttpProvider(
                    new AsyncHttpClientConfig.Builder()
                            .setAsyncHttpClientProviderConfig(grizzlyConfig)
                            .build()))
        };

WBR,
Alexey.

Eugene Prokopiev

Jan 14, 2015, 12:19:04 AM
to asyncht...@googlegroups.com
2015-01-13 23:14 GMT+03:00 Alexey S:

> The problem with Grizzly provider you see is probably related to the fact
> that the thread-pool you pass it is already closed by Netty provider.

Yes, it is impossible to run any provider after NettyProvider :( Is this a bug?

> Additionally there's a difference how Netty and Grizzly providers treat the
> application thread pool ...

Thanks, I tried your code, but I can't see a performance difference
between NettyProvider and GrizzlyProvider. Both are slower than
PlainProvider and JDKAsyncHttpProvider at small request counts, and both
raise IOException: Remotely closed and ConnectException: connection
timed out at counts > 20, with or without setExecutorService(pool).
There are no such issues with PlainProvider and JDKAsyncHttpProvider.

--
WBR,
Eugene Prokopiev

Stéphane Landelle

Jan 14, 2015, 7:27:25 AM
to asyncht...@googlegroups.com
2015-01-14 6:19 GMT+01:00 Eugene Prokopiev <e...@itx.ru>:

> 2015-01-13 23:14 GMT+03:00 Alexey S:
>
>> The problem with the Grizzly provider you see is probably related to the fact
>> that the thread-pool you pass it is already closed by the Netty provider.
>
> Yes, it is impossible to run any provider after NettyProvider :( Is this a bug?

No, it's not. It's the expected behavior. You shouldn't try to reuse the ThreadPool you passed to the NettyProvider.
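
A rough sketch of that constraint, assuming each client gets its own pool and is closed after its run (the method and URL are illustrative, not from the gist):

import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.AsyncHttpClientConfig;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerClientPoolSketch {
    static void runOnce(String url) throws Exception {
        // Each client owns a fresh pool; the provider may shut the pool down on close,
        // so it is never shared with, or reused by, another provider.
        ExecutorService pool = Executors.newFixedThreadPool(25);
        AsyncHttpClient client = new AsyncHttpClient(
                new AsyncHttpClientConfig.Builder().setExecutorService(pool).build());
        try {
            // ... issue the requests against `url` here ...
        } finally {
            client.close();     // releases the provider's resources
            pool.shutdownNow(); // make sure the pool is gone either way
        }
    }
}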

 

>> Additionally, there's a difference in how the Netty and Grizzly providers treat the
>> application thread pool ...
>
> Thanks, I tried your code, but I can't see a performance difference
> between NettyProvider and GrizzlyProvider. Both are slower than
> PlainProvider and JDKAsyncHttpProvider at small request counts, and both
> raise IOException: Remotely closed and ConnectException: connection
> timed out at counts > 20, with or without setExecutorService(pool).
> There are no such issues with PlainProvider and JDKAsyncHttpProvider.

I suspect it's the other way around: the issue is with Starman, which can't deal with concurrency properly.
Then, once again, with just 20 requests, you're mostly measuring cold start-up time.

Eugene Prokopiev

unread,
Jan 14, 2015, 1:55:25 PM1/14/15
to asyncht...@googlegroups.com
2015-01-14 15:27 GMT+03:00 Stéphane Landelle :

> I suspect that's the other way around and the issue is with Starman that
> can't deal with concurrency properly.

Yes, Starman can't work with many concurrent requests, but I need to
work correctly with such HTTP servers.

> Then, once again, with just 20 requests, you're mostly measuring cold start
> up time.

Executing many requests without exceptions is even more interesting
than speed. Why do I get IOException: Remotely closed and
ConnectException: connection timed out with NettyProvider and
GrizzlyProvider, but no exceptions with PlainProvider and
JDKAsyncHttpProvider?

--
WBR,
Eugene Prokopiev

Stéphane Landelle

Jan 14, 2015, 3:34:02 PM
to asyncht...@googlegroups.com
Which versions of AHC, Netty and JDK do you use?



Eugene Prokopiev

Jan 15, 2015, 12:10:35 AM
to asyncht...@googlegroups.com
2015-01-14 23:34 GMT+03:00 Stéphane Landelle :

> Which versions of AHC, Netty and JDK do you use?

$ java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)

AHC and Netty are the latest from Maven:

repositories {
    mavenCentral()
}

dependencies {
    compile 'com.ning:async-http-client:1.9.+'
    compile 'org.codehaus.woodstox:woodstox-core-asl:4.4.+'
    compile 'org.glassfish.grizzly:grizzly-http-server:2.3.+'
    compile 'org.glassfish.grizzly:grizzly-websockets:2.3.+'
    compile 'ch.qos.logback:logback-classic:1.1.+'
    compile 'org.slf4j:jul-to-slf4j:1.7.+'
    compile 'commons-io:commons-io:2.4'
}

gradle dependencies output:

compile - Compile classpath for source set 'main'.
+--- com.ning:async-http-client:1.9.+ -> 1.9.5
|    +--- io.netty:netty:3.10.0.Final
|    \--- org.slf4j:slf4j-api:1.7.7 -> 1.7.10
+--- org.codehaus.woodstox:woodstox-core-asl:4.4.+ -> 4.4.1
|    +--- javax.xml.stream:stax-api:1.0-2
|    \--- org.codehaus.woodstox:stax2-api:3.1.4
+--- org.glassfish.grizzly:grizzly-http-server:2.3.+ -> 2.3.18
|    \--- org.glassfish.grizzly:grizzly-http:2.3.18
|         \--- org.glassfish.grizzly:grizzly-framework:2.3.18
+--- org.glassfish.grizzly:grizzly-websockets:2.3.+ -> 2.3.18
|    +--- org.glassfish.grizzly:grizzly-framework:2.3.18
|    \--- org.glassfish.grizzly:grizzly-http:2.3.18 (*)
+--- ch.qos.logback:logback-classic:1.1.+ -> 1.1.2
|    +--- ch.qos.logback:logback-core:1.1.2
|    \--- org.slf4j:slf4j-api:1.7.6 -> 1.7.10
+--- org.slf4j:jul-to-slf4j:1.7.+ -> 1.7.10
|    \--- org.slf4j:slf4j-api:1.7.10
\--- commons-io:commons-io:2.4

--
WBR,
Eugene Prokopiev

Excilys

Jan 15, 2015, 1:24:17 AM
to asyncht...@googlegroups.com
Also, which OS please?

Eugene Prokopiev

Jan 15, 2015, 3:09:36 AM
to asyncht...@googlegroups.com
2015-01-15 9:24 GMT+03:00 Excilys <slan...@excilys.com>:

> Also, which OS please?

$ uname -a
Linux thinkpad 3.14.25-std-def-alt1 #1 SMP Sat Nov 22 19:26:21 UTC
2014 x86_64 GNU/Linux

--
WBR,
Eugene Prokopiev

Stéphane Landelle

Jan 15, 2015, 3:27:51 AM
to asyncht...@googlegroups.com
Then my best guess is that with Netty or Grizzly you're hammering the server too hard (too many concurrent connections open at the same time) and Starman can't cope with it.

You should introduce some kind of rate limiter.
For example, you could (see the sketch below):
  • store all the "to be executed" requests in a concurrent queue that you poll from
  • store all the results in a concurrent queue that you offer to
  • only start a few requests (5-10) from the main thread
  • start new requests from either a custom AsyncHandler or a Listener on the ListenableFuture, as long as the request queue is not empty
  • block the main thread as long as the results queue doesn't have the expected size (or use a CountDownLatch)
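
A rough sketch of that scheme with the AHC 1.9 API (the URL, request counts and result type are assumptions; error handling is kept minimal):

import com.ning.http.client.AsyncCompletionHandler;
import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.Response;

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;

public class RateLimitedSketch {
    public static void main(String[] args) throws Exception {
        final int total = 100;
        final int initial = 5; // only a handful of requests started from the main thread
        final String url = "http://localhost:3000/"; // illustrative target

        final Queue<String> pending = new ConcurrentLinkedQueue<String>();
        final Queue<Integer> results = new ConcurrentLinkedQueue<Integer>();
        final CountDownLatch done = new CountDownLatch(total);
        for (int i = 0; i < total; i++) {
            pending.offer(url);
        }

        final AsyncHttpClient client = new AsyncHttpClient();

        // The handler records a result and, as long as the pending queue is not
        // empty, chains the next request from the completion callback.
        class ChainingHandler extends AsyncCompletionHandler<Response> {
            @Override
            public Response onCompleted(Response response) throws Exception {
                results.offer(response.getResponseBody().length());
                done.countDown();
                next();
                return response;
            }

            @Override
            public void onThrowable(Throwable t) {
                results.offer(-1);
                done.countDown();
                next();
            }

            private void next() {
                String nextUrl = pending.poll();
                if (nextUrl != null) {
                    client.prepareGet(nextUrl).execute(new ChainingHandler());
                }
            }
        }

        // Kick off only a few requests; the rest are chained by the handlers.
        for (int i = 0; i < initial; i++) {
            String u = pending.poll();
            if (u != null) {
                client.prepareGet(u).execute(new ChainingHandler());
            }
        }

        done.await(); // block the main thread until all results are in
        client.close();
    }
}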



