undertow


Gustavo Galvan

Jun 11, 2015, 3:31:06 PM
to spar...@googlegroups.com
Hi people,

Somewhere in this group or on GitHub I read about using Undertow. Well... I was very curious about embedded Undertow, so I tried it with another micro-framework (which allows using either Jetty or Undertow). The difference is amazing... almost obscene.

Here is the benchmark for a simple "Hello world" (ab -c10 -n10000 -k http://127.0.0.1:8081/):


Jetty:

Server Software:        Jetty(9.2.11.v20150529)
Server Hostname:        127.0.0.1
Server Port:            8081

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      10
Time taken for tests:   0.709 seconds
Complete requests:      10000
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      1410000 bytes
HTML transferred:       120000 bytes
Requests per second:    14100.59 [#/sec] (mean)
Time per request:       0.709 [ms] (mean)
Time per request:       0.071 [ms] (mean, across all concurrent requests)
Transfer rate:          1941.59 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       2
Processing:     0    1   0.6      0       8
Waiting:        0    1   0.5      0       8
Total:          0    1   0.6      1       8
WARNING: The median and mean for the processing time are not within a normal deviation
        These results are probably not that reliable.
WARNING: The median and mean for the waiting time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      2
  99%      3
 100%      8 (longest request)


Undertow:

Server Software:        
Server Hostname:        127.0.0.1
Server Port:            8081

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      10
Time taken for tests:   0.232 seconds
Complete requests:      10000
Failed requests:        0
Keep-Alive requests:    10000
Total transferred:      1510000 bytes
HTML transferred:       120000 bytes
Requests per second:    43022.41 [#/sec] (mean)
Time per request:       0.232 [ms] (mean)
Time per request:       0.023 [ms] (mean, across all concurrent requests)
Transfer rate:          6344.12 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     0    0   0.5      0      15
Waiting:        0    0   0.5      0      15
Total:          0    0   0.5      0      15

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      1
  99%      1
 100%     15 (longest request)




Christian Rivasseau

Jun 11, 2015, 4:51:21 PM
to Gustavo Galvan, spar...@googlegroups.com
Gustavo, thanks for sharing.

Could you share the code? Last time I tried to embed Spark on Undertow I couldn't figure
it out for some reason.




--
Christian Rivasseau
Co-founder and CTO @ Lefty
+33 6 67 35 26 74

Gustavo Galvan

Jun 11, 2015, 5:59:42 PM
to spar...@googlegroups.com
Hi Christian,

Maybe I wasn't clear: I used another, similar framework (because I couldn't embed Undertow in Spark either). I benchmarked Spark and the other framework (both with Jetty) and the speed is the same. It's not my intention to advertise another project here, but for your information, the framework used for the benchmark is Pippo.
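
For reference, the Spark side of that comparison is just the standard embedded-Jetty setup. A minimal sketch (assuming Spark 2.x's static API; class name is illustrative):

```java
import static spark.Spark.get;
import static spark.Spark.port;

public class SparkHello {

    public static void main(String[] args) {
        port(8081); // same port as the Pippo runs below
        get("/", (request, response) -> "Hello World!");
    }
}
```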

Christian MICHON

Jun 11, 2015, 6:21:20 PM
to spar...@googlegroups.com
I'm not sure the ab results are meaningful.

- yes, Undertow is asynchronous
- but Jetty puts more info in the headers, etc., than Undertow.

Look at it this way:
- Server Software is empty with Undertow, but "Jetty(9.2.11.v20150529)" with Jetty
- "hello world" is about 50% of the length of "Jetty(9.2.11.v20150529)"
- so would patching Jetty not to send the Server Software header make it ~3 times faster too?

I agree with Christian R: it'd be good to have both code snippets to compare, whether you're using spark/pippo or not.

For what it's worth, jetty without spark is even faster...
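
The per-request byte counts can be read straight off the ab output above; a quick arithmetic sketch:

```java
public class AbOverhead {

    public static void main(String[] args) {
        // totals reported by ab in the two runs above
        long jettyBytes = 1_410_000, undertowBytes = 1_510_000, requests = 10_000;

        long jettyPerReq = jettyBytes / requests;       // 141 bytes per response
        long undertowPerReq = undertowBytes / requests; // 151 bytes per response

        System.out.println("jetty: " + jettyPerReq + " bytes/req, undertow: " + undertowPerReq + " bytes/req");
    }
}
```

Note that the Undertow run actually transferred slightly more bytes per request (it was the run with keep-alive enabled), so the Server header alone is unlikely to explain the whole gap.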

Gustavo Galvan

Jun 11, 2015, 6:43:38 PM
to spar...@googlegroups.com
There is only one version of the code, because Pippo supports Undertow out of the box. Just add the jars to the classpath (Jetty or Undertow, plus their dependencies), and Pippo discovers which server to use.


package test;

import ro.pippo.core.Pippo;

public class test01 {

    public static void main(String[] args) {
        Pippo pippo = new Pippo();
        pippo.getServer().getSettings().port(8081);
        pippo.getApplication().GET("/", (routeContext) -> routeContext.send("Hello World!"));
        pippo.start();
    }
}

Gustavo Galvan

Jun 12, 2015, 4:51:20 PM
to spar...@googlegroups.com
Based on Christian Michon's comment, I made a new benchmark, now with payloads (and with the recent addition of embedded Tomcat).

Code:

package test;

import java.nio.charset.Charset;
import java.util.Random;

import ro.pippo.core.Pippo;

public class test01 {

    public static void main(String[] args) {
        // generate payload
        byte[] array = new byte[1024 * 10];
        new Random().nextBytes(array);
        String payload = new String(array, Charset.forName("UTF-8"));

        // pippo stuff
        Pippo pippo = new Pippo();
        pippo.getServer().getSettings().port(8081);
        pippo.getApplication().GET("/", (routeContext) -> routeContext.send("Hello World!\n" + payload));
        //pippo.getApplication().GET("/", (routeContext) -> routeContext.send("Hello World!"));
        pippo.start();
    }
}
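
One caveat about generating the payload this way: when random bytes are decoded as UTF-8, invalid sequences are replaced with U+FFFD, so the string that is actually sent will not be exactly 10kb once re-encoded. A small sketch of the effect (the byte values here are illustrative):

```java
import java.nio.charset.StandardCharsets;

public class Utf8RoundTrip {

    // decode bytes as UTF-8 (invalid sequences become U+FFFD), then re-encode
    static int reencodedLength(byte[] raw) {
        String decoded = new String(raw, StandardCharsets.UTF_8);
        return decoded.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        // 0xC3 starts a 2-byte sequence but 0x28 ('(') is not a continuation byte,
        // so the malformed 0xC3 decodes to U+FFFD, which re-encodes as 3 bytes
        byte[] invalid = {(byte) 0xC3, (byte) 0x28};
        System.out.println(reencodedLength(invalid)); // prints 4, not 2
    }
}
```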



Benchmark command: ab -c100 -n100000 -k http://127.0.0.1:8081/

Results (req/s):

Server    | "Hello World!" |  payload (+10kb) |  payload (+5kb) |  payload (+1kb)
---------------------------------------------------------------------------------
Undertow  |    70659.84    |     9192.98      |    27928.34     |   52048.23
Tomcat    |    20315.36    |    10649.05
Jetty     |    19295.64    |    10255.17

Enjoy!



Christian MICHON

Jun 13, 2015, 3:13:10 AM
to spar...@googlegroups.com
Thanks, it's starting to be more meaningful, but some numbers seem to be missing, so no valid conclusion is possible for now.

Thanks also for the pippo snippet.

Gustavo Galvan

Jun 13, 2015, 11:22:43 AM
to spar...@googlegroups.com
I made one more benchmark, to compare server degradation with different payloads (just requests/s):

Undertow
--------
             --- concurrency ---
payload      x1     x10     x100
---------------------------------
  1kb     11700   41000    51500
  5kb      6200   22200    25500
 10kb      3200    7500    11300
 25kb      1680    4700     5400
 50kb       937    2850     3190

Jetty
-----
             --- concurrency ---
payload      x1     x10     x100
---------------------------------
  1kb      7500   16400    17600
  5kb      5100   10200    12400
 10kb      3500    8400     9600
 25kb      2000    5400     5900
 50kb      1050    3400     3600


So, my basic conclusion is that Undertow is the choice if your average response is below 10kb. Beyond that, it makes no sense.
But this is not a negative comment, because I have a project with an API service whose average response is below 10kb! I could double the performance (and reduce costs) with Undertow.




Gustavo Galvan

Jun 13, 2015, 6:18:11 PM
to spar...@googlegroups.com
UPDATE:

I configured Undertow's buffer size to 32kb (the default is 16kb) and performance is up again, even with large payloads!
It's a beast!
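
For reference, the equivalent buffer-size setting in raw embedded Undertow would look roughly like this (a sketch assuming Undertow's standard builder API; the handler here is illustrative):

```java
import io.undertow.Undertow;
import io.undertow.util.Headers;

public class BigBufferServer {

    public static void main(String[] args) {
        Undertow server = Undertow.builder()
                .addHttpListener(8081, "127.0.0.1")
                .setBufferSize(32 * 1024) // default is 16kb; 32kb helped with larger responses
                .setHandler(exchange -> {
                    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                    exchange.getResponseSender().send("Hello World!");
                })
                .build();
        server.start();
    }
}
```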

