Sizing Up Post-Quantum Signatures for the Web


Bas Westerbaan

Oct 31, 2021, 1:19:43 PM
to 'Dan Brown' via pqc-forum
TLDR

1. In a typical TLS handshake on the Web there are at least six signatures and two public keys.
2. Even when larger signatures fit within the congestion window, we see a double-digit percentage slowdown, which can make adoption a hard sell for browser vendors and content servers.
3. Because of the lead time, we shouldn’t wait too long to adopt PQ signatures on the web. It would be ideal if these six signatures and two public keys together fit within 9kB.


Dear forum,

Nowadays, around 90% of all web-browsing is protected using TLS (“https”). [0] To make our browsing post-quantum secure, we need to update both the key exchange and the signatures in TLS. [1] For the former, with respect to performance, the majority of the KEM finalists will do. This is great, as without a post-quantum key exchange, traffic encrypted today could be decrypted by a quantum computer in the future.

There is less urgency for post-quantum signatures in TLS: only by the time there is a sufficiently large and stable quantum computer do the signatures need to have been updated. However, we expect the lead time to be much longer, as more parties and more infrastructure are involved. Also, none of the finalists, alternates, or even upcoming schemes we’re aware of fit the bill perfectly on their own.

The cause is that a typical handshake contains six signatures, each with different requirements.

1. Online. The handshake signature is the only one that is created on every handshake.
2. Offline. The other signatures are created in advance.
a. With public key. The certificate chain contains public keys.
b. Without public key. The public keys for the two SCTs (for certificate transparency) and the OCSP staple are not included in the handshake.
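As a back-of-envelope illustration of what this count implies, here is a sketch tallying the authentication bytes a drop-in replacement would add to a handshake. The sizes are the published parameter-set sizes for these schemes (Falcon-512 signatures vary slightly in length; 666 bytes is the typical figure); treat the totals as approximate.

```python
# Handshake authentication data: 2 public keys (leaf and intermediate
# certificate) plus 6 signatures (handshake, leaf cert, intermediate
# cert, 2 SCTs, OCSP staple). All sizes in bytes.
schemes = {
    "Ed25519 (classical)": {"pk": 32, "sig": 64},
    "Falcon-512": {"pk": 897, "sig": 666},
    "Dilithium2": {"pk": 1312, "sig": 2420},
}

def handshake_auth_bytes(pk: int, sig: int) -> int:
    """Total bytes for 2 public keys and 6 signatures."""
    return 2 * pk + 6 * sig

for name, s in schemes.items():
    total = handshake_auth_bytes(s["pk"], s["sig"])
    print(f"{name:>20}: {total:6d} bytes ({total / 1024:.1f} kB)")
```

Under these assumptions Falcon-512 stays under the 9kB budget discussed below, while Dilithium2 does not.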

There are several proposals to reduce the number of signatures, for instance [2,3]. We think changing protocols to suit the performance characteristics of PQ-crypto is the smart move. However, such changes could take years to adopt. Thus we need to understand the performance of plain TLS with drop-in PQ signatures.

The different variables and constraints make it a challenging puzzle. The best thing is to just try the options. There have been some nice papers measuring or simulating PQ TLS [4,5,6,7]. For TLS as used on the Web, they do not give us a complete picture:

1. The SCTs and OCSP staple aren’t considered. Leaving out half (three) of the signatures changes the results considerably.
2. The networks tested or emulated offer insights, but are far from representative of real-world conditions. Either tests were conducted between two datacenters (which does not include real-world last-mile conditions such as Wi-Fi or spotty mobile connections); or a network was simulated with unrealistic packet loss behavior.

Here Cloudflare can contribute. Setting up an experiment with a modified browser is quite involved, especially with all the possible variations. Instead, as a first step, we decided to measure the most striking variable: the size.

To simulate larger signatures on an unmodified client, we pad the certificate chain with 1kB dummy certificates. We found a small fraction of clients or middleboxes had issues with them. Hence, for our experiment, we launched background requests on a small fraction of challenge pages to a page configured with these dummy certificates. For each connection we measured the handshake time. The graph and additional details can be found in the attached excerpt of an upcoming [9] blog post.

We see two effects. First, every kilobyte added requires a bit more time, due to limited bandwidth and possible physical-layer retransmissions. Secondly, when we fill our congestion window, we need to wait an extra RTT.

In previous discussions on the topic, often only the second effect is considered. A commonly heard solution is to increase the initial congestion window. In our experiment we used an initial congestion window of 30 instead of the default of 10. We see that the first effect already leads to a double-digit slowdown below 10kB — well before filling the congestion window.
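The two effects can be sketched with a toy model. Hedged: the bandwidth, MSS, and initial-congestion-window values below are illustrative assumptions, and the model ignores congestion-window growth during the handshake.

```python
import math

def serialization_ms(extra_bytes: int, bandwidth_mbps: float) -> float:
    """Effect 1: every extra kilobyte costs transmission time."""
    return extra_bytes * 8 / (bandwidth_mbps * 1e6) * 1e3

def extra_round_trips(flight_bytes: int, initcwnd: int = 30,
                      mss: int = 1460) -> int:
    """Effect 2: data beyond the initial congestion window waits
    additional RTTs (window growth ignored, a simplification)."""
    return max(0, math.ceil(flight_bytes / (initcwnd * mss)) - 1)

# On a 4 Mbps link, 10 kB of extra signatures adds 20 ms before any
# congestion-window effect kicks in.
print(serialization_ms(10_000, 4.0))   # 20.0
# A 30-segment initial window holds ~43.8 kB; a 50 kB server flight
# therefore costs one extra round trip.
print(extra_round_trips(50_000))       # 1
```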

The TLS handshake is just one step in a long chain required to show you a webpage. Casually browsing, it would be hard to notice a TLS handshake that’s 60% slower. But such differences add up. To make a website really fast, you need many seemingly insignificant speedups. Browser developers take this seriously: only in exceptional cases [8] does Chrome allow a change that slows down a microbenchmark by even a percent.

Because of the many parties and complexities involved, we should avoid waiting too long to adopt post-quantum signatures in TLS. That’s a hard sell if it comes at the price of a double-digit slowdown, not only for content servers but also for browser vendors and clients.

A timely adoption of PQ signatures on the web would be great. Our data so far suggests that this will be easiest if the six signatures and two public keys fit together in 9kB.

Best,

Bas Westerbaan
Cloudflare Research


[0] See, for instance
https://radar.cloudflare.com/
https://transparencyreport.google.com/https/overview
https://web.archive.org/web/20201111210500/https://netmarketshare.com/report.aspx?options=%7B%22filter%22%3A%7B%7D%2C%22dateLabel%22%3A%22Custom%22%2C%22attributes%22%3A%22share%22%2C%22group%22%3A%22secure%22%2C%22sort%22%3A%7B%22share%22%3A-1%7D%2C%22id%22%3A%22https%22%2C%22dateInterval%22%3A%22Monthly%22%2C%22dateStart%22%3A%222019-10%22%2C%22dateEnd%22%3A%222019-10%22%2C%22segments%22%3A%22-1000%22%7D
[1] Also DNS needs an upgrade, but we’ll leave that for another time.
[2] https://thomwiggers.nl/project/kemtls/
[3] https://www.amazon.science/publications/speeding-up-post-quantum-tls-handshakes-by-suppressing-intermediate-ca-certificates
[4] Sikeridis, Kampanakis, Devetsikiotis. Assessing the overhead of post-quantum cryptography in TLS 1.3 and SSH. CoNEXT’20.
[5] Paquin, Stebila, Tamvada. Benchmarking Post-Quantum Cryptography in TLS. PQCrypto 2020.
[6] Sikeridis, Kampanakis, Devetsikiotis. Post-Quantum Authentication in TLS 1.3: A Performance Study. NDSS2020.
[7] And just a few days ago: https://eprint.iacr.org/2021/1447
[8] https://chromium.googlesource.com/chromium/src/+/refs/heads/main/docs/speed/addressing_performance_regressions.md#if-you-believe-the-regression-is-justified
[9] To appear at https://blog.cloudflare.com/sizing-up-post-quantum-signatures

Sizing Up PQ-Signatures, Summary of Results.pdf

Blumenthal, Uri - 0553 - MITLL

Oct 31, 2021, 4:18:45 PM
to Bas Westerbaan, 'Dan Brown' via pqc-forum

Wouldn’t a signature-less KEM be a better choice for TLS than its current typical approach of six signatures, at least some of which involve both signing and verifying dynamically (per connection)?

--

Regards,

Uri

 

There are two ways to design a system. One is to make it so simple that there are obviously no deficiencies.

The other is to make it so complex that there are no obvious deficiencies.

 - C. A. R. Hoare

--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
To view this discussion on the web visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/C75C0CDC-AC5E-4338-B13C-68E5FB91218C%40westerbaan.name.




Bas Westerbaan

Oct 31, 2021, 5:31:35 PM
to Blumenthal, Uri - 0553 - MITLL, 'Dan Brown' via pqc-forum
The handshake signature can be replaced by a KEM, which results in KEMTLS; see [2] in my previous mail. Adopting KEMTLS, however, could add years to the transition.

I do not know of a practical way to replace the remaining signatures with KEMs.

Best,

 Bas


Blumenthal, Uri - 0553 - MITLL

Oct 31, 2021, 6:16:11 PM
to Bas Westerbaan, 'Dan Brown' via pqc-forum
With KEMTLS each peer only needs to verify one signature (over static key/very). I daresay it’s tolerable. 

Regards,
Uri

On Oct 31, 2021, at 17:31, Bas Westerbaan <b...@westerbaan.name> wrote:



Bas Westerbaan

Nov 1, 2021, 4:55:24 AM
to Blumenthal, Uri - 0553 - MITLL, 'Dan Brown' via pqc-forum

On 31 Oct 2021, at 23:16, Blumenthal, Uri - 0553 - MITLL <u...@ll.mit.edu> wrote:

With KEMTLS each peer only needs to verify one signature (over static key/very). I daresay it’s tolerable. 

This would be the case when replacing plain TLS as used, for instance, internally in a company.

However, TLS as used on the Web (web browsing) requires more signatures:

1. Typically there is at least one intermediate certificate in the chain.
2. There are at least two mandatory Signed Certificate Timestamps to prove the leaf certificate is included in Certificate Transparency logs.[1]
3. There is an OCSP staple used to prove the leaf certificate isn’t revoked.[2]

Hence KEMTLS only reduces the number of signatures from 6 to 5.

Best,

 Bas


D. J. Bernstein

Nov 1, 2021, 6:08:36 AM
to pqc-...@list.nist.gov
Bas Westerbaan writes:
> However, TLS as used on the Web (web-browsing,) requires more signatures:
> 1. Typically there is at least one intermediate certificate in the chain.
> 2. There are at least two mandatory Signed Certificate Timestamps to prove the
> leaf certificate is included in Certificate Transparency logs.[1]
> 3. There is an OCSP staple used to prove the leaf certificate isn’t revoked.[2]
> Hence KEMTLS only reduces the number of signatures from 6 to 5.

Long-term signatures can be efficiently distributed through DNS caches,
as in Section 3.4 of https://dl.acm.org/doi/10.1145/2508859.2516737.
Essentially all of the long-distance transmission is then replaced with
much faster lookups from the local ISP.

I agree that this is going beyond what KEMTLS does. I also agree with
your general comment that "such changes could take years to adopt".
There's a risk that focusing purely on the desired end scenario will
lose the race against attackers building quantum computers. Certainly
we should understand whether the costs of post-quantum signatures will
prevent an easy rollout within the TLS details that exist today.

Regarding your graphs, the median delay looks like about 8ms added for
each 10 kilobytes, meaning about 10Mbps, while the top-quartile delay
looks like about 20ms added for each 10 kilobytes, meaning about 4Mbps.
It'd be great to see more data points and some historical data to get an
idea of how the network speeds are evolving. I presume Cloudflare is
continually collecting metrics on bulk-download speeds.

---Dan

P.S. The statement "only in exceptional cases [8] does Chrome allow a
change that slows down a microbenchmark by even a percent" doesn't
appear to be correct. The cited Chrome page

https://chromium.googlesource.com/chromium/src/+/refs/heads/main/docs/speed/addressing_performance_regressions.md#if-you-believe-the-regression-is-justified

doesn't say "exceptional"; on the contrary, it spends several paragraphs
listing "some common justification scenarios", including "What do we
gain? It could be something like: ... Additional security".
signature.asc

Mike Hamburg

Nov 1, 2021, 11:55:39 AM
to D. J. Bernstein, pqc-...@list.nist.gov
Hi all,

Possibly this is getting off-topic, but...

I’m also hoping that OCSP stapling can be replaced with CRLite eventually, where a compressed CRL is distributed through your favorite CDN, whether that’s Cloudflare or your ISP’s caches or whatever.  Firefox has already deployed this, and I don’t see a good reason not to roll out CRLite or some variant of it to nearly every web transaction instead of OCSP stapling.  OCSP (stapling) would then become the uncommon case.

Basically you can replace the staples by consulting a ~1-2 MB compressed CRL on disk, plus ~10-20kB in updates on a typical day, depending on the exact strategy.  Given the size of post-quantum signatures, this would be a big improvement over OCSP.  Better data structures are now available since the original CRLite, which can further reduce file sizes and query times.  The size will increase as the web grows, but it may also shrink as sites transition to shorter-lived certs, because it counts only unexpired revoked certs.  It can balloon up to tens of megabytes for one update period if the web’s PKI explodes again, as it did with Heartbleed.
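To make the mechanism concrete, here is a toy filter cascade in the spirit of CRLite. Hedged: real CRLite tunes filter sizes and hash counts per level and ships a compact serialization; this sketch only shows the layering logic that makes revocation queries exact for certificates known to the builder.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter; sizes and hash count are illustrative."""
    def __init__(self, items, bits_per_item=16, k=3):
        self.m, self.k = max(8, bits_per_item * len(items)), k
        self.bits = bytearray((self.m + 7) // 8)
        for item in items:
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))

def build_cascade(revoked, valid):
    """Layer 0 holds the revoked set; each next layer holds the
    previous layer's false positives, so queries over known certs
    are exact despite each filter being lossy."""
    layers, include, exclude = [], set(revoked), set(valid)
    while include:
        bf = Bloom(include)
        layers.append(bf)
        include, exclude = {x for x in exclude if x in bf}, include
    return layers

def is_revoked(cert, layers):
    for depth, bf in enumerate(layers):
        if cert not in bf:
            # Absent at an even depth: not revoked; odd: revoked.
            return depth % 2 == 1
    return len(layers) % 2 == 1
```

The compression comes from the filters being far smaller than an explicit list, while the cascade repairs their false positives.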

Not that this solves the problem immediately, but at least we have a proven and deployed way forward.

Cheers,
— Mike

Structures:
I’m cooking one too.



Bas Westerbaan

Nov 9, 2021, 5:56:18 AM
to pqc-forum, mi...@shiftleft.org, pqc-...@list.nist.gov, D. J. Bernstein
On Monday, November 1, 2021 at 4:55:39 PM UTC+1 mi...@shiftleft.org wrote:
Hi all,

Possibly this is getting off-topic, but...

I’m also hoping that the OCSP stapling can be replaced with CRLite eventually, where a compressed CRL is distributed through your favorite CDN, whether that’s Cloudflare or your ISP’s caches or whatever.  Firefox has already deployed this, and I don’t see a good reason not to roll out CRLite or some variant of it to nearly every web transaction instead of OCSP stapling.  OCSP (stapling) would then become the uncommon case.

Basically you can replace the staples by consulting a ~1-2 MB compressed CRL on disk, plus ~10-20kB in updates on a typical day, depending on the exact strategy.  Given the size of post-quantum signatures, this would be a big improvement over OCSP.  Better data structures are now available since the original CRLite, which can further reduce file sizes and query times.  The size will increase as the web grows, but it may also shrink as sites transition to shorter-lived certs, because it counts only unexpired revoked certs.  It can balloon up to tens of megabytes for one update period if the web’s PKI explodes again, as it did with Heartbleed.

Not that this solves the problem immediately, but at least we have a proven and deployed way forward.

I agree. OCSP soft-fail can then also be reconsidered.

Together with out-of-band distribution of intermediates [1] this leaves 4 signatures and 1 public key, which helps (and I think we should pursue it), but it doesn't fundamentally change the situation.

Also, not every HTTPS client is a (standalone) browser. We could move the CRL and intermediates to the OS, but that's not an overnight change.

While we're off-topic, allow me a less practical tangent: if we also distribute Certificate Transparency's STH out-of-band, then an authentication path suffices to show that a (pre)certificate is included. This would require server operators to wait a while before rolling out new certificates. Disregarding the lead time, we can take another step: if the leaf cert is included in the CT logs, why bother checking its signature? Then, by changing the certificate format, we could leave the intermediate's signature on the leaf out of the certificate sent to the client (replacing it with a hash).

Best,

 Bas

 


Peter Schwabe

Nov 11, 2021, 1:11:21 AM
to Bas Westerbaan, pqc-...@list.nist.gov
Bas Westerbaan <b...@westerbaan.name> wrote:

Dear Bas, dear all,

> TLDR
>
> 3. Because of the lead time, we shouldn’t wait too long to adopt PQ
> signatures on the web. It would be ideal if these six signatures and
> two public keys would fit together within 9kB.

How many of the offline signatures could realistically be stateful
hash-based signatures?

All the best,

Peter

Bas Westerbaan

Nov 11, 2021, 7:10:57 AM
to Peter Schwabe, Bas Westerbaan, pqc-...@list.nist.gov
>   3. Because of the lead time, we shouldn’t wait too long to adopt PQ
>   signatures on the web. It would be ideal if these six signatures and
>   two public keys would fit together within 9kB.

How many of the offline signatures could realistically be stateful
hash-based signatures?

All of the offline signatures, but not without adding some serious lead time.

The first issue is that we need to standardise some new parameters. Let's Encrypt issues 2M certificates per day.[1] An LMS_M24_H25_W8 intermediate would only last 7 days and requires 1.2kB for the signature.[2] That might not be workable. Truncating hashes to 16 bytes[3] with a tree of height 30, we get signatures of around 768 bytes for an intermediate that lasts more than a year. Generating that keypair takes about 4 days using SHAKE-128 on a single core, assuming 75ns per f1600. After that, signing will be very quick if you cache[4] the 32GB Merkle tree. Of course, we'll need suitable HSMs.
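The height-30 arithmetic can be double-checked with a quick sketch. Hedged assumptions: one signature per issued certificate, and 16-byte nodes for the cached tree, per the truncation above.

```python
SIGS_PER_DAY = 2_000_000  # Let's Encrypt's issuance rate, from above

def lifetime_days(tree_height: int) -> float:
    """Days until a 2^h-leaf hash-based key runs out of signatures."""
    return 2 ** tree_height / SIGS_PER_DAY

def merkle_tree_gib(tree_height: int, node_bytes: int = 16) -> float:
    """Full cached Merkle tree: ~2^(h+1) nodes of node_bytes each."""
    return 2 ** (tree_height + 1) * node_bytes / 2 ** 30

print(f"h=30 lasts {lifetime_days(30):.0f} days")   # ~537: over a year
print(f"h=30 tree: {merkle_tree_gib(30):.0f} GiB")  # 32
```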

The elephant in the room is keeping the state: the number of the first unused signature. That's challenging, but not as challenging as one might think considering the following.

1. The state shouldn't be seen as part of the private key. If you do see it as part of the private key, then you'll worry about a private key being restored from a backup. The state is not sensitive: it could be kept publicly even on a distributed system.
2. The state doesn't have to be kept perfectly: only monotonically. If there is disagreement, just pick the largest one to be safe.
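Point 2 above can be made concrete with a sketch of such a state tracker. Hedged: the class and method names are invented for illustration; the point is only that safety requires monotonicity, not precision.

```python
class SignatureIndexState:
    """Tracks the index of the first unused one-time signature.
    The value is not secret; safety only requires that it never
    moves backwards. Conflicting replicas reconcile by taking the
    maximum (skipping indices wastes signatures but is safe)."""

    def __init__(self, start: int = 0):
        self._next = start

    def reserve(self, count: int = 1) -> int:
        """Hand out the next `count` indices; returns the first."""
        first = self._next
        self._next += count
        return first

    def reconcile(self, *reported: int) -> int:
        """On disagreement (e.g. after restoring from a backup),
        jump to the largest index any replica has seen."""
        self._next = max(self._next, *reported)
        return self._next
```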

Clearly, this is far from a drop-in replacement on the CA/CT log-side.

Best,

 Bas

[3] With n=16 the signer can find with 2^64 work a single signature that signs two different messages. As signatures are used in TLS, this is not an issue.
[4] The Merkle tree is not secret, so no need for 32GB of special memory.

D. J. Bernstein

Nov 11, 2021, 12:38:06 PM
to pqc-...@list.nist.gov
'Bas Westerbaan' via pqc-forum writes:
> Together with out-of-band distribution of intermediates [1] this
> leaves 4 signatures and 1 public key, which helps (and I think we
> should pursue it), but it doesn't fundamentally change the situation.

Going a step further and looking up all of the long-term signatures in
DNS fundamentally changes the situation. The most popular signatures are
then automatically cached locally at the ISP for quick lookups by all of
the ISP's users. Often user devices will have their own DNS caches, and
the lookups of the same signature on that device will be even faster.

The reason to take specifically DNS rather than some ad-hoc associative
array is that DNS is the Internet's existing distributed caching system.
This eliminates issues of OS support and application coordination: the
DNS caches and DNS lookup mechanisms are already available, and one
simply has to use them.

One very easy way to integrate this into existing protocols is to
specify a new DNS-hash signature type that simply sends a 32-byte hash
of a signature. The device then looks up the 32-byte hash in DNS to
obtain the full signature.

(The full DNS name being looked up will be, e.g., a 51-byte text
encoding of the 32 bytes, plus a protocol identifier, plus the domain
name that the protocol is working with. For parallel retrieval of
several small packets concatenated to obtain a full signature, it will
be useful to allocate 1 extra byte in the DNS-hash signature to say the
number of packets being retrieved; the DNS name being looked up will
then include a counter. The full signature can have more type data.)
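A sketch of this name construction. Hedged assumptions: hex encoding is used here (64 characters rather than the 51-byte encoding mentioned above), and the `_sighash` label and counter placement are invented for illustration.

```python
import hashlib

def dns_hash_names(sig_hash: bytes, domain: str, packets: int = 1):
    """Names a client would look up to fetch a full signature, given
    only its 32-byte hash from the handshake. DNS labels max out at
    63 bytes, so the 64-char hex digest is split into two labels."""
    assert len(sig_hash) == 32
    hexd = sig_hash.hex()
    label = f"{hexd[:32]}.{hexd[32:]}"
    if packets == 1:
        return [f"{label}._sighash.{domain}"]
    # Multi-packet signatures: one name per chunk, fetched in parallel.
    return [f"{i}.{label}._sighash.{domain}" for i in range(packets)]

names = dns_hash_names(hashlib.sha256(b"full signature bytes").digest(),
                       "example.com", packets=3)
```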

The same principles also apply to other long-term data such as public
keys and certificate metadata. Even short-term data, such as Google's
ephemeral key for the next 2 minutes, can be usefully cached to some
extent, especially at the ISP level.

The literature that I cited before shows how to obtain even better
latency---at the cost of more protocol modifications---by overlapping
the necessary lookups with the DNS lookups that are done anyway. Even
without this improvement, the latency can easily be lower than current
TLS whenever the DNS cache is hit.

It's good to understand the costs of every TLS server sending public
data to every TLS client. It's also good to understand how much easily
removable redundancy there is in the system. We want to get things
rolled out asap, but we also don't want short-term speed considerations
to be damaging everyone's long-term security.

---Dan
signature.asc

Kampanakis, Panos

Nov 11, 2021, 1:29:53 PM
to D. J. Bernstein, pqc-...@list.nist.gov
> (The full DNS name being looked up will be, e.g., a 51-byte text encoding of the 32 bytes, plus a protocol identifier, plus the domain name that the protocol is working with. For parallel retrieval of several small packets concatenated to obtain a full signature, it will be useful to allocate 1 extra byte in the DNS-hash signature to say the number of packets being retrieved; the DNS name being looked up will then include a counter. The full signature can have more type data.)

Unfortunately, this is not DNS any more; it is a new protocol that would require many years for deployment.
And there are a few ways it can break. DNSSEC cannot use any of the PQ signature candidates for the same reasons.

D. J. Bernstein

Nov 11, 2021, 2:57:42 PM
to pqc-...@list.nist.gov
NISTPQC should be looking at what can be easily rolled out today, but
should also be looking ahead 5 years, 10 years, 15 years, etc. Look at
how much changed from TLS 1.1 in 2006 to TLS 1.2 in 2008 to TLS 1.3 in
2018! To reiterate: we don't want to lose the race against attackers
building quantum computers; we also don't want short-term speed
considerations to be damaging everyone's long-term security.

'Kampanakis, Panos' via pqc-forum writes:
> Unfortunately, this is not DNS any more, it is a new protocol that
> would require many years for deployment.

It's most certainly DNS. There are constant experiments with sending new
types of data through DNS, and the most useful experiments end up being
widely used. Try typing "dig txt google.com" from a command line and
figuring out when those applications of DNS were introduced. I agree
that improvements could take years to adopt, but saying that a simple
use of DNS "would require many years" is exaggerating the difficulty.

> And there are a few ways it can break. DNSSEC cannot use any of the PQ
> Sig candidates due of the same reasons.

There are many bigger things than post-quantum cryptography that DNSSEC
is unable to do because it starts from an artificially limited view of
(1) the problem space and (2) the solution space. (See, e.g., the series
of 15 "DNS security mess" talks on https://cr.yp.to/talks.html.) So it's
circular to refer to DNSSEC as an argument that something doesn't work.

---Dan
signature.asc

John Mattsson

Nov 12, 2021, 9:06:19 AM
to pqc-...@list.nist.gov

With DNS Queries over HTTPS (DoH), it has become, or will become, a bit easier to introduce new types of data in DNS. An application can choose to use a DNS resolver that supports the DNS extension it needs.

 

https://en.wikipedia.org/wiki/DNS_over_HTTPS



Kampanakis, Panos

Nov 12, 2021, 10:25:47 AM
to John Mattsson, pqc-...@list.nist.gov

Indeed.
But the panel discussion from NIST's 3rd PQC Standardization Conference https://www.nist.gov/video/third-pqc-standardization-conference-session-v-applications (13:40-14:20) discusses some scalability concerns with DoH/TCP/TLS for some DNS operators.

 

 

