I've been experimenting with the ssl_certificate_by_lua* directive and the latest official OpenResty release to dynamically control the SSL handshake process. Everything works beautifully and as expected. However, I've been measuring roughly a 13% to 15% slowdown in terms of "SSL negotiations per second" compared to a "static" NGINX vhost configuration where the SSL handshake is, let's say, performed "natively", without the ssl_certificate_by_lua* directive in place. Let me explain further.
The test is performed on the same system (Debian Jessie), with OpenResty from the official deb packages, using a recent build of the siege CLI benchmarking tool that supports SNI. There is therefore no SSL session cache involved, and the benchmarked "aspect" is "how many SSL handshakes the configuration/setup can pull". I'm aware that a full SSL handshake is a rather expensive operation (and that most of the CPU cycles are spent in OpenSSL). Still, I'm puzzled to see such a measurable difference on the same system, with the same versions of OpenResty/NGINX/OpenSSL, between setups that do and do not use the ssl_certificate_by_lua* directive.
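For concreteness, the benchmark invocation was along these lines (the hostname and the concurrency/duration values here are illustrative, not the exact ones I used):

```shell
# -b: benchmark mode (no internal delay between requests)
# -c: number of concurrent simulated users
# -t: test duration
siege -b -c 50 -t 30S https://sni.example.com/
```

Since every simulated user negotiates its own TLS connection and no session cache is in play, the transactions/sec figure siege reports tracks the handshake rate fairly closely.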
The code in the ssl_certificate_by_lua* hook that I used for the benchmark was very simple, almost identical to the "Synopsis" code listing in the official ngx.ssl documentation. The only difference is that the actual cert is read in the init_by_lua* hook and cached in a Lua table for simplicity.
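To make the setup concrete, here is a minimal sketch of what the hook looked like, following the ngx.ssl Synopsis; the file paths and the `cert_cache` table name are hypothetical, and the placeholder cert is only there because nginx still requires the static ssl_certificate directives even when the handshake is driven from Lua:

```nginx
init_by_lua_block {
    -- read the PEM files once at master startup and cache them
    local function slurp(path)
        local f = assert(io.open(path, "rb"))
        local data = f:read("*a")
        f:close()
        return data
    end
    cert_cache = {  -- hypothetical global cache table
        cert = slurp("/etc/nginx/ssl/site.crt"),  -- hypothetical path
        key  = slurp("/etc/nginx/ssl/site.key"),  -- hypothetical path
    }
}

server {
    listen 443 ssl;
    server_name sni.example.com;

    # placeholder cert/key, never actually served
    ssl_certificate     /etc/nginx/ssl/placeholder.crt;
    ssl_certificate_key /etc/nginx/ssl/placeholder.key;

    ssl_certificate_by_lua_block {
        local ssl = require "ngx.ssl"

        -- drop the placeholder cert set by the static config
        ssl.clear_certs()

        local cert, err = ssl.parse_pem_cert(cert_cache.cert)
        if not cert then
            ngx.log(ngx.ERR, "failed to parse PEM cert: ", err)
            return ngx.exit(ngx.ERROR)
        end
        ssl.set_cert(cert)

        local key, err = ssl.parse_pem_priv_key(cert_cache.key)
        if not key then
            ngx.log(ngx.ERR, "failed to parse PEM key: ", err)
            return ngx.exit(ngx.ERROR)
        end
        ssl.set_priv_key(key)
    }
}
```

Note that only the raw PEM strings are cached here, so the parse calls still run on every handshake; that per-handshake parsing is one obvious candidate for where the extra cost comes from.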
Are you seeing similar results in your own use cases, or do you believe I'm missing something?
Cheers!
Filip