siege -b -t 10S -c 2 http://localhost/test.txt
= 6247 trans/sec
siege -b -t 10S -c 2 https://localhost/test.txt
= 28 trans/sec
So plain HTTP is over 220 times faster (serving a 12-byte file) ...
Is this normal?
I installed the "siege" package, and I can reproduce these values (7200 trans/sec vs 36 trans/sec).
I don't know for sure, but I think siege opens a new connection for every request.
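If I remember correctly, whether siege reuses connections is controlled by a `connection` directive in its config file (`~/.siegerc` on most installs). A hypothetical excerpt, assuming a standard siege build (check the output of `siege.config` for your actual file and defaults):

```
# ~/.siegerc (excerpt)
#   connection = close      -> new TCP (and TLS) handshake for every request
#   connection = keep-alive -> each simulated user reuses one connection
connection = close
```

With `close`, every HTTPS request pays a full TLS handshake, which would explain a large gap on a tiny file.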
Now it becomes really strange ... I installed the wrk benchmark tool and ran the same tests (but now with my 2008-byte index.html to get a more realistic result).
...
So is this a "multi-threaded vs. async" challenge, or is there a problem with civetweb's keep-alive implementation?
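The keep-alive question can be sanity-checked without either server. A minimal, self-contained sketch (Python stdlib only; the 12-byte body and the request count are arbitrary stand-ins for the original test) that compares one-connection-per-request against a single reused keep-alive connection on localhost:

```python
import http.client
import http.server
import threading
import time

# Tiny local HTTP/1.1 server serving a fixed 12-byte body (stand-in for test.txt).
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # required for keep-alive

    def do_GET(self):
        body = b"hello world!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
N = 200

# Variant 1: a fresh TCP connection for every request.
t0 = time.perf_counter()
for _ in range(N):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/test.txt")
    conn.getresponse().read()
    conn.close()
per_request = time.perf_counter() - t0

# Variant 2: one persistent connection reused for all requests (keep-alive).
t0 = time.perf_counter()
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(N):
    conn.request("GET", "/test.txt")
    conn.getresponse().read()
conn.close()
keep_alive = time.perf_counter() - t0

server.shutdown()
print(f"new connection each time: {N / per_request:.0f} req/s")
print(f"single keep-alive conn:   {N / keep_alive:.0f} req/s")
```

If the benchmark tool really does hold a keep-alive connection, the second number is the regime it measures; if the server drops keep-alive, every request falls back to the first.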
Btw: I've also tested the websocket performance with a little benchmark (http://www.biox.de/benchmark/ws-benchmark.html) against civetweb and a node.js websocket server (http://www.biox.de/benchmark/ws-server.js - you need https://github.com/websockets/ws to run it).

The benchmark measures the requests per second over a single websocket connection; the server simply echoes the requests. On localhost, civetweb delivers 23 req/s - node.js delivers about 370 req/s ...
You are using CivetWeb V1.9, a version that is still under development. So there might be a problem in the keep-alive implementation, or it might have been only a temporary issue. Currently, the file handling and connection close handling are changing.
I did not do any performance measurements with V1.9 yet.
The last performance measurements and optimizations were done in V1.7 in comparison to FTP.
Ok, I just found out how it works.
URI: ws://localhost:8080/echo.lua
No. of requests: 1000
0.547s = 1828.2 req / s
I've tested it again with 1.8: same results.

But I have found something interesting: in both 1.8 and 1.9 it is not possible to set "keep_alive_timeout_ms". If I add "keep_alive_timeout_ms 500" to civetweb.conf I get:

...
Loading config file ./civetweb.conf
./civetweb.conf: line 4 is invalid, ignoring it: keep_alive_timeout_ms
...
If I set keep_alive_timeout_ms in an embedded server, the server doesn't respond. Maybe this is the issue?
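For comparison, a civetweb.conf sketch using only options I believe those versions accept (option names taken from the civetweb user manual; verify against your build, since the valid option set varies by version):

```
# civetweb.conf - minimal keep-alive setup
listening_ports 8080
document_root ./htdocs
enable_keep_alive yes
num_threads 50
# keep_alive_timeout_ms is apparently not recognized by 1.8 / early 1.9,
# hence the "line 4 is invalid" message when it appears in the config file
```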
> Ok, I just found out how it works.
> URI: ws://localhost:8080/echo.lua
> No. of requests: 1000
> 0.547s = 1828.2 req / s

How did you get this result? I get max. 23 req/s with both 1.8 and 1.9 ...