http vs https performance

Mynock17

Nov 7, 2016, 4:20:59 AM
to civetweb
While benchmarking my small test server with siege I get these results:

siege -b -t 10S -c 2 http://localhost/test.txt
= 6247 trans/sec

siege -b -t 10S -c 2 https://localhost/test.txt
= 28 trans/sec

So http is over 220 times faster (serving a 12-byte file) ...
Is this normal?

bel

Nov 7, 2016, 2:08:59 PM
to civetweb


On Monday, November 7, 2016 at 10:20:59 AM UTC+1, Mynock17 wrote:

Is this normal?


It depends.
In some situations HTTP is faster, in others HTTPS is faster (https://www.httpvshttps.com/).

I don't know siege, only curl and wget. What does this test do? Does it open a new connection for every request, or reuse an open connection (keep_alive)?
An https connection requires a complex handshake to initiate, with many steps and much more data than 12 bytes.
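
For a quick check without siege (an untested sketch): curl reuses one connection for all requests when you give it many URLs in a single invocation, e.g. via URL globbing, while a shell loop pays a full TCP + TLS handshake for every single request:

curl -k -s "https://localhost/test.txt?[1-100]" > /dev/null

for i in {1..100} ; do curl -k -s https://localhost/test.txt ; done > /dev/null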



bel

Nov 7, 2016, 5:34:41 PM
to civetweb
I installed the "siege" package, and I can reproduce these values (7200 trans/sec vs 36 trans/sec).
I don't know for sure, but I think it does open a new connection for every request.

I used

    time for i in {1..100} ; do curl -k -s https://localhost/test.txt ; done > /dev/null

and got 1.2 seconds for http and 29.4 seconds for https. For this line, I know it opens a new connection for every request.
I did the same for a 90 kB file and got the same results (1.2 and 29 s) in the curl test (I used public_server.c from the test directory, but the content does not matter). For a 1 MB file we are still at 1.6 and 30 s, for 100 MB I get 1.7 and 140 s.

For a Linux server, it also depends on the -allow_sendfile_call option (the default on Linux is yes). If you set it to no, http will be slower for larger files (8.2 s for the 100 MB file), but it does not matter for smaller files (1.2 s for the 90 kB file). Sendfile can only be used for http, not for https, and it is also not available on Windows servers.

The "siege" results for larger files also look similar to this, so siege probably works in a similar way.
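
For reference, toggling that option on the stand-alone server looks roughly like this (a sketch; document root, ports and certificate are placeholders for your setup):

./civetweb -document_root ./test -listening_ports 80,443s -ssl_certificate ./server.pem -allow_sendfile_call no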

Mynock17

Nov 8, 2016, 3:52:52 AM
to civetweb


I installed the "siege" package, and I can reproduce these values (7200 trans/sec vs 36 trans/sec).
I don't know for sure, but I think it does open a new connection for every request.

Yes, siege does open a new connection for every request. For some reason I can't enable keep-alive with siege - maybe I'll try another tool ...
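
If I read the siege documentation correctly, keep-alive should be controlled by a "connection" directive in the siege configuration file, roughly like this - but so far it did not work for me:

# in ~/.siegerc (or wherever "siege -C" reports the config file)
connection = keep-alive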

Mynock17

Nov 8, 2016, 4:56:46 AM
to civetweb
Now it becomes really strange ...

I installed the wrk benchmark tool and did the same tests (but now with my 2008-byte index.html to get a more realistic result).
If I run civetweb with enable_keep_alive set to "yes" I get these results (wrk enables keep-alive by default):

wrk -t2 -c2 -d10s http://localhost/index.html
= 45 Requests/sec

wrk -t2 -c2 -d10s https://localhost:443/index.html
= 45 Requests/sec

But when I force "Connection: Close" I get this:
wrk -t2 -c2 -d10s --header "Connection: Close" http://localhost/index.html
= 4464 Requests/sec

wrk -t2 -c2 -d10s --header "Connection: Close" https://localhost:443/index.html
= 91 Requests/sec

I don't understand this ...

bel

Nov 10, 2016, 1:11:54 PM
to civetweb
I don't know wrk either, so I cannot really help interpreting the result since I would have to know EXACTLY what this tool does.
I am not sure if adding "Connection: close" to the header is sufficient for wrk to know it should disable keep_alive - probably it's just an arbitrary header line that is sent to the server without any interpretation. This could lead to either the server or the client hanging in some TCP shutdown state.
I could tell you what is going on if you post some wireshark trace.
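
For example, a capture taken like this on the loopback interface can be opened in Wireshark afterwards (just a sketch, adjust interface and ports as needed):

sudo tcpdump -i lo -w civetweb-bench.pcap 'tcp port 80 or tcp port 443'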

But anyway, what is the real problem? Unknown benchmark tools deliver results that cannot be interpreted without knowing exactly what the individual tool is doing. Is the goal to understand benchmark tools? Or is there a real performance problem with the server in a certain situation? Then we need to understand that situation, not the benchmark tool.

A benchmark I use is the 100 and the 1000 image test in the test directory (https://github.com/civetweb/civetweb/tree/master/test). Note: the 1000 images test requires the server to be built with Lua support. If you don't have that, the 100 image test is good enough.
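
On Linux, building with Lua support should be something like this (a sketch - check the Makefile for the exact flags):

make WITH_LUA=1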



On Tuesday, November 8, 2016 at 10:56:46 AM UTC+1, Mynock17 wrote:
Now it becomes really strange ...

I installed the wrk benchmark tool and did the same tests (but now with my 2008 Bytes index.html to get a more realistic result).
...

Mynock17

Nov 11, 2016, 4:26:57 AM
to civetweb
Here are the results from your 100 image test (I've run them a few times in Chrome and show only the fastest result):

civetweb with keep-alive disabled:
http = 350 ms
https = 1298 ms

civetweb with keep-alive enabled:
http =  878 ms
https = 839 ms

So civetweb (http) is 2.5x faster without keep-alive.

bel

Nov 11, 2016, 2:27:53 PM
to civetweb
Did you clear the browser cache before testing, and between every retry?
What were your settings for num_threads and static_file_max_age?
Did you use any specific SSL settings, in particular ssl_protocol_version and ssl_cipher_list?
Did you use a localhost connection, or were server and browser connected through a network?
What system do you use? (You could use "civetweb -I" to show some system information.)

My tests with Firefox, localhost, num_threads=500, static_file_max_age=0, 10 retries each (tests partially running in parallel):

https, keep_alive=no, 25737 ms, 24281 ms, 21478 ms,  4680 ms, 24649 ms, 25260 ms, 25223 ms, 25393 ms,  3548 ms, 13593 ms
http, keep_alive=no, 17764 ms, 18328 ms, 15275 ms, 2247 ms, 19090 ms, 17769 ms, 19059 ms, 19636 ms,  2662 ms, 11984 ms

https, keep_alive=yes, 11929 ms, 11775 ms, 11887 ms, 11718 ms, 11939 ms, 1897 ms, 1957 ms, 1946 ms, 6247 ms, 12440 ms
http, keep_alive=yes, 9263 ms, 8381 ms, 9616 ms, 9451 ms, 10157 ms, 1766 ms, 1763 ms, 1552 ms, 5572 ms, 9530 ms


My tests with Firefox, localhost, num_threads=1, static_file_max_age=0, 10 retries each (tests running serialized):

https, keep_alive=no, 16781 ms, 16396 ms, 19596 ms, 15859 ms, 18450 ms, 16886 ms, 7195 ms, 16272 ms, 17356 ms, 17322 ms
http, keep_alive=no, 12634 ms, 13936 ms,  14910 ms, 14905 ms, 14005 ms, 12276 ms, 3458 ms, 13686 ms, 12812 ms, 14269 ms

https, keep_alive=yes, 585079 ms, 426847 ms, 36858 ms, 28927 ms
http, keep_alive=yes, 583799 ms, 38414 ms, 37429 ms, 36985 ms
... now Windows is doing *something* in the background, so I probably have to repeat it on Linux or test it later.

But in my tests, http was always a little bit faster, and keep alive was faster as well.

Mynock17

Nov 14, 2016, 3:25:12 AM
to civetweb
civetweb -I:

CivetWeb V1.9 - Linux #201610220733 SMP Sat Oct 22 11:35:18 UTC 2016 (4.8.4-040804-generic) - x86_64
Features: Files HTTPS WebSockets
Version: 1.9
Build: Nov 14 2016
gcc: 5.4.0
Data model: i:2/4/8/8, f:4/8/16, c:1/4, p:8, s:8, t:8

civetweb.conf:

listening_ports 80,443s
ssl_certificate ./server.pem
enable_keep_alive yes (resp. no)
num_threads 500
static_file_max_age 0

New results of 100images-test on elementary OS (= Ubuntu 16.04) and Chrome (only best results are shown):

keep-alive disabled:
http: 277ms
https: 1287ms

keep-alive enabled:
http: 796ms
https: 887ms

I've checked the headers (keep-alive resp. "Connection: Close" and max-age are set correctly). The files were not cached.
So keep-alive enabled (http) is slower than "Connection: Close".

Mynock17

Nov 14, 2016, 4:40:40 AM
to civetweb
All tests are running (localhost) on an Acer Aspire ES1-131 (Intel N3150, 2 GB RAM, 32 GB eMMC).

bel

Nov 16, 2016, 6:01:43 PM
to civetweb
Now I can reproduce this behavior and will analyze it in the next few days.

bel

Nov 18, 2016, 7:09:35 PM
to civetweb
In my tests, with some parameters changed:
keep-alive=yes is somewhat slower than keep-alive=no for HTTP, and
keep-alive=yes is somewhat faster than keep-alive=no for HTTPS.
Not by a factor of 2, but by ~20%.

Parameters:
-static_file_max_age 0 -enable_keep_alive yes -num_threads 10 -request_timeout_ms 500

With:
-tcp_nodelay 1
and a connection not from localhost but over wifi, I get the same speed for HTTP with keep-alive=yes as with keep-alive=no.

Depending on the connection and the test environment, you have to test whether changing these parameters helps.
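
As a starting point, the stand-alone server could be launched with something like this (a sketch; document root, ports and certificate are placeholders for your setup):

./civetweb -document_root ./test -listening_ports 80,443s -ssl_certificate ./server.pem -enable_keep_alive yes -num_threads 10 -request_timeout_ms 500 -tcp_nodelay 1 -static_file_max_age 0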

I also found this interesting page:
http://stackoverflow.com/questions/4139379/http-keep-alive-in-the-modern-age

There also seems to be a flaw in most browsers when using keep-alive: they don't close the connection after they have loaded all page elements.

I will ha

Mynock17

Nov 21, 2016, 5:06:15 AM
to civetweb
I've made a few tests with two other servers (one is a simple http/s server based on node.js, and the other one is the "mod_http_files" module from the Prosody XMPP server running with LuaJIT).
With both of them the 100-image test runs 5 times faster than civetweb (all tests with keep-alive enabled).
So is this a "multi-threaded vs. async challenge", or is there a problem with civetweb's keep-alive implementation?

Btw: 
I've tested also the websocket-performance with a little Benchmark (http://www.biox.de/benchmark/ws-benchmark.html) with civetweb and a node.js Websocket-Server (http://www.biox.de/benchmark/ws-server.js - you need https://github.com/websockets/ws to run it).
The benchmark tests the requests per second over a single websocket connection. The server simply echoes the requests.
On localhost civetweb delivers 23 req/s - node.js delivers about 370 req/s ...
    

bel

Nov 21, 2016, 5:05:56 PM
to civetweb

On Monday, November 21, 2016 at 11:06:15 AM UTC+1, Mynock17 wrote:
So is this a "multi-threaded vs. async challenge" or is there a problem with civetwebs keep-alive implementation?


You are using CivetWeb V1.9, a version that is still under development. So it might be that there is a problem in the keep-alive implementation, or it might be something that was only temporary. Currently, the file handling and connection close handling are changing.
I did not do any performance measurements with V1.9 yet.
The last performance measurements and optimizations were done in V1.7 in comparison to FTP.

I guess I have to do some detailed measurements again, but this will not be possible before December.
Unfortunately, it is not possible to run them fully automatically (in continuous integration tests).

I don't think this is an "async vs. multithreaded" related topic - unless you are running out of available threads, but you certainly have enough according to your config file above.

 
Btw: 
I've tested also the websocket-performance with a little Benchmark (http://www.biox.de/benchmark/ws-benchmark.html) with civetweb and a node.js Websocket-Server (http://www.biox.de/benchmark/ws-server.js - you need https://github.com/websockets/ws to run it).
The benchmark tests the requests per second over a single websocket-connection. The server simple echos the requests.
On localhost civetweb delivers 23 req /s - node.js delivers about 370 req /s ...

I don't know this test, so I cannot say anything without analyzing this in detail.
....
Ok, I just found out how it works.

URI: ws://localhost:8080/echo.lua
No. of requests: 1000
0.547s = 1828.2 req / s




echo.lua

Mynock17

Nov 22, 2016, 5:59:09 AM
to civetweb
You are using CivetWeb V1.9, a version that is still under development. So it might be there is a problem in the keep-alive implementation, it might be that it was only temporary as well. Currently, the file handling and connection close handling is changing.
I did not do any performance measurements with V1.9 yet.
The last performance measurements and optimizations were done in V1.7 in comparison to FTP.

I've tested it again with 1.8: same results.
But I have found something interesting: In both 1.8 and 1.9 it is not possible to set "keep_alive_timeout_ms".
If I add "keep_alive_timeout_ms 500" to civetweb.conf I get: 
...
Loading config file ./civetweb.conf
./civetweb.conf: line 4 is invalid, ignoring it: keep_alive_timeout_ms
...

If I set keep_alive_timeout_ms in an embedded server, the server doesn't respond. Maybe this is the issue? 
 

Ok, I just found out how it works.

URI: ws://localhost:8080/echo.lua
No. of requests: 1000
0.547s = 1828.2 req / s

How did you get this result? I get max. 23 req /s with both 1.8 and 1.9 ... 

bel

Nov 22, 2016, 2:53:29 PM
to civetweb


On Tuesday, November 22, 2016 at 11:59:09 AM UTC+1, Mynock17 wrote:

I've tested it again with 1.8: same results.
But I have found something interesting: In both 1.8 and 1.9 it is not possible to set "keep_alive_timeout_ms".
If I add "keep_alive_timeout_ms 500" to civetweb.conf I get: 
...
Loading config file ./civetweb.conf
./civetweb.conf: line 4 is invalid, ignoring it: keep_alive_timeout_ms
...

keep_alive_timeout_ms did not exist in V1.8 - I added it only recently during the ongoing V1.9 development.
In earlier versions, it used the request_timeout_ms value (default 30000 ms) for the same purpose.
Now it is possible to set these two values independently.
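
For example (a sketch using the current V1.9 option names in civetweb.conf):

request_timeout_ms 30000
keep_alive_timeout_ms 500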
 

If I set keep_alive_timeout_ms in an embedded server, the server doesn't respond. Maybe this is the issue? 


This should not happen. What value did you use?

 
 
Ok, I just found out how it works.

URI: ws://localhost:8080/echo.lua
No. of requests: 1000
0.547s = 1828.2 req / s

How did you get this result? I get max. 23 req /s with both 1.8 and 1.9 ... 

I use a Windows build with Lua and Websocket support. On Linux it would be "make WITH_LUA=1 WITH_WEBSOCKET"; on Windows I used the civetweb_lua Visual Studio 2010 project on Win XP. I actually have different development machines that I use alternately for cross-platform code projects (an old Win XP PC, a new Win10 PC and an Ubuntu PC) and different test machines I don't use for development. Every development machine is also used for tests, so the oldest (XP) and the newest (10) Windows versions are regularly tested.

Anyway, this result was with a WinXP PC (dual-core Pentium, 3 GHz, 2 GB RAM), Firefox 50 on the same PC: http://www.biox.de/benchmark/ws-benchmark.html
The echo.lua above is copied to the test directory, and the server is started with this configuration:

authentication_domain 192.168.0.4  .... my IP address
access_log_file r:\log\access.log
error_log_file r:\log\error.log
enable_keep_alive yes
listening_ports 8080,80,[::]:8080,443s
document_root r:\test
ssl_certificate r:\cert\server.pem
num_threads 10
request_timeout_ms 5000
ssl_ca_file r:\cert\client.pem
ssl_default_verify_paths no
ssl_cipher_list ECDHE-ECDSA-AES256-GCM-SHA384:DES-CBC3-SHA:AES128-SHA:AES128-GCM-SHA256
ssl_protocol_version 3
websocket_timeout_ms 300000
lua_preload_file r:\lib\preload.lua
tcp_nodelay 1
static_file_max_age 0
 

bel

Nov 22, 2016, 3:09:54 PM
to civetweb


On Tuesday, November 22, 2016 at 11:59:09 AM UTC+1, Mynock17 wrote:

How did you get this result? I get max. 23 req /s with both 1.8 and 1.9 ... 

URI: ws://localhost:8080/echo.lua
No. of requests: 1000

0.766s = 1305.5 req / s
0.578s = 1730.1 req / s
0.657s = 1522.1 req / s
0.703s = 1422.5 req / s
0.781s = 1280.4 req / s
0.578s = 1730.1 req / s
0.640s = 1562.5 req / s
0.938s = 1066.1 req / s
1.329s = 752.4 req / s
2.094s = 477.6 req / s
... don't know what the PC was doing again towards the end of the test

Same test from a Linux PC (Firefox) to the Windows XP server over a 100 Mbit LAN:
25 req/s
25 req/s
25 req/s
25 req/s
25 req/s




 

bel

Nov 22, 2016, 3:47:06 PM
to civetweb

Server on Linux, Firefox on Linux - localhost:
stable 25 req/s
(Same as server on Windows, Firefox on Linux LAN connection)

Server on Linux, Firefox on Windows:
5 req/s

But now I get ~150 req/s from Windows to Windows - I got 1500 some minutes earlier - probably I have to restart the PC.



Mynock17

Nov 23, 2016, 5:26:46 AM
to civetweb
I've finally found it: It's tcp_nodelay ...

100images.html with keep_alive yes:
http: 353 ms with tcp_nodelay 1
http: 875 ms with tcp_nodelay 0
https: 392 ms with tcp_nodelay 1
https: 922 ms with tcp_nodelay 0

100images.html with keep_alive no:
http: 389 ms with tcp_nodelay 1
http: 390 ms with tcp_nodelay 0
https: 1410 ms with tcp_nodelay 1
https: 1477 ms with tcp_nodelay 0

ws-benchmark.html running 100 requests with tcp_nodelay 0:
ws: 4.368s = 22.9 req / s 
wss: 4.363s = 22.9 req / s

ws-benchmark.html running 1000 requests with tcp_nodelay 1:
ws: 0.681s = 1469.0 req / s
wss: 0.396s = 2528.3 req / s

All tests running with Ubuntu 16.04 and Chrome.

Conclusion: If you want to make civetweb fly, then set tcp_nodelay to 1 and enable keep_alive : )
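
In civetweb.conf terms, that is (keep_alive_timeout_ms can additionally be tuned on V1.9):

enable_keep_alive yes
tcp_nodelay 1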
