Thanks for all the hints. I finally finished my migration from the Grizzly/Jersey framework to Vert.x 3.2 and ran some stress tests with Gatling:
A REST service receives a query and then:
- validates the query using Drools
- forwards the query to another REST service, which performs a lookup in a Postgres database with 100k entries
- presents the results
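To give an idea of the Vert.x side, here is a minimal sketch of the front service (endpoint names, ports and the address of the lookup service are just placeholders, and the Drools validation is only stubbed):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.http.HttpClient;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;

public class QueryVerticle extends AbstractVerticle {

  private HttpClient client;

  @Override
  public void start() {
    client = vertx.createHttpClient();

    Router router = Router.router(vertx);
    router.get("/query").handler(this::handleQuery);

    vertx.createHttpServer()
        .requestHandler(router::accept)   // Vert.x 3.x style
        .listen(8080);
  }

  private void handleQuery(RoutingContext ctx) {
    String q = ctx.request().getParam("q");

    // The real service validates the query with Drools here (e.g. via a
    // KieSession); this sketch only does a trivial sanity check.
    if (q == null || q.isEmpty()) {
      ctx.response().setStatusCode(400).end("invalid query");
      return;
    }

    // Forward the query to the lookup service and relay its response.
    client.getNow(8081, "localhost", "/lookup?q=" + q, resp ->
        resp.bodyHandler(body -> ctx.response().end(body)));
  }
}

Deploying it is a one-liner: Vertx.vertx().deployVerticle(new QueryVerticle()).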
The test was run on a MacBook Pro with 16GB RAM. Each service was started with -Xmx2g -Xms2g.
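Just to illustrate the shape of the load, the injection profile described below (30 users per second for 100 seconds) can be written in Gatling roughly like this (a sketch using Gatling's Java DSL, which only exists in newer Gatling releases; base URL, endpoint and class name are placeholders):

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class QuerySimulation extends Simulation {

  HttpProtocolBuilder httpProtocol = http.baseUrl("http://localhost:8080");

  // Each virtual user fires one request against the query endpoint.
  ScenarioBuilder scn = scenario("query")
      .exec(http("query").get("/query?q=test"));

  {
    // 30 new users per second for 100 seconds -> 3000 requests in total.
    setUp(scn.injectOpen(constantUsersPerSec(30).during(100)))
        .protocols(httpProtocol);
  }
}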
For a run with 30 users/sec over 100 seconds I got the following results:
Grizzly
> request count 3000 (OK=3000 KO=0 )
> min response time 11 (OK=11 KO=- )
> max response time 457 (OK=457 KO=- )
> mean response time 27 (OK=27 KO=- )
> std deviation 45 (OK=45 KO=- )
> response time 50th percentile 15 (OK=15 KO=- )
> response time 75th percentile 19 (OK=19 KO=- )
> mean requests/sec 29.998 (OK=29.998 KO=- )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms 3000 (100%)
> 800 ms < t < 1200 ms 0 ( 0%)
> t > 1200 ms 0 ( 0%)
> failed 0 ( 0%)
Vert.x
> request count 3000 (OK=3000 KO=0 )
> min response time 6 (OK=6 KO=- )
> max response time 373 (OK=373 KO=- )
> mean response time 10 (OK=10 KO=- )
> std deviation 24 (OK=24 KO=- )
> response time 50th percentile 7 (OK=7 KO=- )
> response time 75th percentile 8 (OK=8 KO=- )
> mean requests/sec 30.001 (OK=30.001 KO=- )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms 3000 (100%)
> 800 ms < t < 1200 ms 0 ( 0%)
> t > 1200 ms 0 ( 0%)
> failed 0 ( 0%)
Vert.x is faster and scales better: whereas my Grizzly implementation runs into problems with more than 30 users/sec, Vert.x can handle 100 users!
Great software, thanks to all developers.
--Ulrich