Hi Vadim,
Regarding your consideration 1: I don’t believe we would rate a hybrid of a category 2 postquantum scheme and a 256-bit classically secure scheme as meeting category 5 postquantum security. First of all, it does not protect nearly as well as a category 5 scheme against a scenario where quantum computers advance to the point where they can just barely do Shor’s algorithm on cryptographically interesting targets, but classical computers advance to the point where they can realistically do 2^128 AES operations. Moreover, even ignoring the entirely plausible possibility that quantum operations may turn out to always be significantly more expensive than the equivalent classical operations, category 5 places a higher requirement on quantum security than category 2, not just classical. This is due to the parallelism issue which is noted in our call for proposals:
“It is worth noting that the security categories based on these reference primitives provide substantially more quantum security than a naïve analysis might suggest. For example, categories 1, 3 and 5 are defined in terms of block ciphers, which can be broken using Grover’s algorithm, with a quadratic quantum speedup. But Grover’s algorithm requires a long-running serial computation, which is difficult to implement in practice. In a realistic attack, one has to run many smaller instances of the algorithm in parallel, which makes the quantum speedup less dramatic.”
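To make the parallelism point concrete, here is a small sketch (my illustration, not part of the call for proposals) of the standard accounting: splitting a Grover search over 2^k keys across p machines cuts the per-machine depth to roughly sqrt(2^k/p) iterations, but raises the total number of quantum operations to roughly sqrt(p*2^k), whereas a classical search parallelizes with no such penalty.

# Sketch: how parallelism erodes Grover's quadratic speedup.
# Splitting a search over 2^k keys across p machines means each machine
# searches 2^k / p keys, needing ~sqrt(2^k / p) Grover iterations (depth),
# while the total quantum work across all machines is p * sqrt(2^k / p)
# = sqrt(p * 2^k).
import math

def grover_parallel(k: int, p: int):
    """Return (per-machine depth, total quantum operations), as log2 values."""
    depth = (k - math.log2(p)) / 2          # log2 of sqrt(2^k / p)
    total = math.log2(p) + depth            # log2 of p * sqrt(2^k / p)
    return depth, total

for p_log in (0, 20, 40, 64):               # 1 machine up to 2^64 machines
    depth, total = grover_parallel(256, 2 ** p_log)
    print(f"p = 2^{p_log:2}: depth ~2^{depth:.0f}, total ops ~2^{total:.0f}")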
Rather, I would see the value of hybridizing a postquantum scheme with ECC as protection in the scenario where quantum computers completely fizzle, but there is a major advance in the cryptanalysis of the postquantum scheme. This would not be reflected in the security strength categories, as it's very difficult to numerically capture the risk of unknown cryptanalytic advances, even on a coarse-grained scale like our 5 security strength categories.
This brings us to your consideration 2: I see combinations like the one described in your consideration 2 as providing value in much the same way, for the case where quantum computers don't completely fizzle. These combinations would only require one of the component algorithms to be FIPS Approved in order to be permissible under FIPS (and it should already be possible to get FIPS approval for such a combination if it also includes something NIST has already approved, like ECDH). So nothing NIST is planning to do should prevent the use of these diverse hybrid cryptosystems. Nonetheless, it would be interesting to hear community feedback about whether NIST should do something more than the minimum we've promised (at least one standard each for a postquantum PKE/KEM and a postquantum signature) to actively encourage the use of more diverse hybrid modes.
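For readers wondering what such a combination looks like mechanically, here is a minimal sketch (illustrative only, not a NIST specification or an approved construction) of the simplest style of KEM combiner: hash both component shared secrets together with the corresponding public values, so the derived key is secure as long as at least one component scheme is.

# Minimal concatenation-style KEM combiner sketch (illustrative only).
# ss_ec / ss_pq are the shared secrets from the two component schemes
# (e.g. ECDH and a post-quantum KEM); ct_ec / ct_pq are the corresponding
# public values / ciphertexts.  Binding the ciphertexts into the KDF input
# is the usual precaution against mix-and-match attacks.
import hashlib

def combine(ss_ec: bytes, ct_ec: bytes, ss_pq: bytes, ct_pq: bytes) -> bytes:
    h = hashlib.sha3_256()
    for part in (ss_ec, ct_ec, ss_pq, ct_pq):
        h.update(len(part).to_bytes(4, "big"))   # length-prefix each field
        h.update(part)
    return h.digest()                            # 32-byte hybrid session key

# Toy usage with placeholder byte strings standing in for real KEM outputs:
key = combine(b"ecdh-shared-secret", b"ecdh-public-share",
              b"pq-kem-shared-secret", b"pq-kem-ciphertext")
print(key.hex())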
I agree with Ray. A hybrid should target a clear security goal. I would like to clarify a few things about hybrid mode.
People from CMVP testing labs and from vendors often approach us to ask about different variations. I am sure that hybrid modes will introduce even more interesting variations. But for interoperability and simplicity, we probably have to limit the options for hybridizing different schemes.
Lily
From: Vadim Lyubashevsky <vadim....@gmail.com>
Sent: Wednesday, August 28, 2019 1:59 PM
To: pqc-forum <pqc-...@list.nist.gov>
Subject: [pqc-forum] hybrid mode and the issues it raises
Dear all,
At the industry panel of the 2nd NIST PQC conference, there seemed to be a consensus that a hybrid mode (in which a post-quantum scheme is combined with an EC scheme so that the resulting scheme is at least as classically secure as the EC one) should be the default option for the foreseeable future. I don't think anyone disagreed, and it seemed like NIST actually wanted the community to go ahead and agree on a combiner for the hybrid. If we seriously start considering hybrid modes (and I think we should), then I think this is a game-changer for the standardization process in at least three ways (they are kind of obvious, but weren't discussed at the conference):
1. Reasoning for the definitions of the NIST security levels. NIST security level 5, for example, is defined as being at least as hard as AES, quantumly and classically. This, as I understand it, means being 128-bit secure quantumly and 256-bit secure classically. All submissions have some gap between their quantum and classical securities, but it's never as large as that of AES. Lattice schemes, for example, have security gaps of just a dozen or so bits. So to claim level 5, schemes set parameters to be 256-bit secure classically. But someone clever/sneaky could have done the following: constructed a scheme that was only 128-bit quantum secure and then used it in hybrid mode with a 512-bit EC scheme (ECDH or ECDSA). At the cost of 64 bytes in the public key and output, which is much less than what one would need to add to achieve 256-bit classical security with just the post-quantum scheme, one gets level 5. As an example, if I look at our CRYSTALS-Dilithium signature scheme, this trick gets us from Level 2 to Level 5 virtually for free (size-wise).
I don't think anyone did this (or did they?), possibly because it's against the spirit of what we're trying to do, which is to create a secure standalone scheme. But if the hybrid mode is the default option and its purpose is exactly to provide classical security, then why should we care about the classical security of our post-quantum algorithms? I would suggest that for the third round, a better way to measure the submissions is to concentrate only on the quantum hardness and to ask for, e.g., 96, 128, 160, or 192 bits of quantum security. The only reason I see for also asking for classical hardness of the submissions is if we don't have faith in the classical hardness of EC primitives ... but I don't know how people feel about that.
2. Early standardization of "safe" primitives. There was also some discussion about standardizing a scheme that's not too efficient, but in which we have a lot of confidence. If we talk about KEMs: if someone asked me which KEM I trust most, I'd pick the pure-LWE "Frodo" without hesitation (and I will not argue much if someone says McEliece). Personally, I have more faith in the classical security of that scheme than in RSA. So clearly, I should vote that we standardize that one first, right? But ... if we're allowing for hybrids, then at half the size of one Frodo key exchange, I can do e.g. Kyber+NTRUPrime+Rollo+SIKE (the latter makes things slower, so leave it out if you wish) + some other code-based scheme; a rough size sketch follows after point 3. This hybrid is smaller than Frodo, and I now have more confidence in the security of (at least) one of the primitives comprising it than in the security of Frodo alone. I think it was Scott Fluhrer who remarked that he thinks McEliece and SPHINCS+ will be standardized, but will never be used (because of their inefficiency). So if we consider hybrids, is there still a reason to standardize a KEM that is much less efficient than a combination of more compact KEMs based on different assumptions (this is not a rhetorical question)? Why not standardize efficient KEMs which we will only use in hybrid mode for a while, and then perhaps drop some components as time progresses once we're confident in the security of the remaining ones? Of course, we should also have some efficient stand-alone schemes in which we are confident, for situations where the device is not powerful enough to run the whole hybrid and has to rely on just one scheme. With SPHINCS+ (and Picnic), the situation is different since they are based on the security of symmetric primitives -- so arguably no combination of mathematical assumptions is going to give us more confidence in the security of the resulting signature.
3. (Not) standardizing untested submissions. Some of the entrants remaining in the second round are based on rather new assumptions - e.g. the compact code-based schemes or the multivariate LUOV signature. It seems unlikely to me that any of these could get standardized as stand-alone schemes if the process were to end in the next few years. But maybe it still makes sense to use them in hybrid mode with other schemes that are more tested. Adding them may provide added security as in the example above.
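As a rough size check of the claim in point 2, here is a small sketch (the byte counts are approximate figures taken from the round-2 submission documents and should be double-checked against the current specs; Rollo and the extra code-based scheme are omitted here):

# Rough size comparison (approximate round-2 figures, in bytes) between a
# single FrodoKEM-1344 key exchange and a hybrid of more compact KEMs.
# One "exchange" here means one public key plus one ciphertext.
sizes = {                      # (public key, ciphertext), approximate
    "FrodoKEM-1344": (21520, 21632),
    "Kyber1024":     (1568, 1568),
    "sntrup4591761": (1218, 1047),
    "SIKEp751":      (564, 596),
}

def exchange_bytes(names):
    return sum(sizes[n][0] + sizes[n][1] for n in names)

frodo = exchange_bytes(["FrodoKEM-1344"])
hybrid = exchange_bytes(["Kyber1024", "sntrup4591761", "SIKEp751"])
print(f"FrodoKEM-1344 exchange : {frodo:6d} bytes")
print(f"Kyber+NTRUPrime+SIKE   : {hybrid:6d} bytes "
      f"(~{100 * hybrid / frodo:.0f}% of Frodo)")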
I would be very curious to hear NIST's and the community's opinions about these (or maybe other) effects of including a hybrid mode on the standardization process.
Thanks (if you read this far :) ),
Vadim
Vadim Lyubashevsky writes:
> taking a hybrid of three future efficient round 3 KEMs should be more
> secure than any one scheme that NIST chooses to standardize early
Consider an application integrating mceliece6960119. Can you please name
specific parameter choices for three KEMs that you consider "efficient",
and spell out the details of why you think a hybrid of those three KEMs
with those parameter choices would be better than mceliece6960119?
Being concrete is an important step towards falsification. I expect that
followup analyses will show that your specific proposal simultaneously
increases costs _and_ risks for cryptographic users.
Sorry for the delayed response. Anyone can look at the parameters for McEliece/Kyber/Frodo/SIKE/etc. and figure out whether they are happy with the public-key/ciphertext/run-time trade-offs. These numbers speak for themselves, and I have nothing to add to the discussion between practitioners who will potentially be putting these schemes into use. So if someone wants to use McEliece because they deem it optimal for their application, then fine. But there are also many scenarios where using McEliece is not optimal, yet it feels to me that some people would still lean towards using it because of the often-perpetuated narrative that McEliece is the gold standard for post-quantum security. The only point that I wanted to make is that I personally disagree with this, and especially so if one considers hybrids.
Sure, McEliece is as old as RSA, but its study has almost exclusively been confined to academia (and a niche area of academia, at that). There is no precise way to measure how well something has been studied in academia, but citation count is probably the least crude metric that exists. So here are RSA and some papers introducing various PQ schemes:
1977. “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems.” RSA – 23600 citations (4100 since 2015)

Of course you may argue that the LWE paper didn’t have any concrete parameters, and so there wasn’t any possible cryptanalysis to speak of. And it’s true: lattice cryptography (except for NTRU) developed “top-down”, meaning that people wanted to understand the general problem and the cryptosystems came as particular instantiations. This approach has many advantages, with the biggest, in my view, being that it attracts people from outside cryptography to work on cryptographically relevant problems. There have indeed been some valuable and fundamental lattice algorithms / techniques developed by non-cryptography people. Examples include the original sieving algorithm (Ajtai, Kumar, Sivakumar), which is now a core component for setting concrete parameters, the BKW algorithm (Blum, Kalai, Wasserman), and the Arora-Ge algorithm, which precludes certain parameter sets in symmetric-based LWE crypto.
I think that keeping the cryptographic algorithms as general and natural as possible, and not just restricting to the parameters that optimize a cryptosystem, is an extremely valuable feature. This is why, if we try to predict which scheme will receive the most scrutiny (even if indirect) if standardized, I would say LWE is the number one choice, Ring-LWE is next, Ring-LWR follows (rounding instead of adding errors is quite unnatural, and probably only cryptanalysts will be looking at that feature), then NTRU (Ring-LWE/LWR schemes can be used for advanced primitives, like FHE, whereas NTRU cannot since it is insecure in that parameter range; and a segment of the AI community is now starting to look at FHE) and McEliece.
And if we consider SIKE (which hasn’t been studied as extensively yet, but with the entire ECC research community seemingly working on it, in 5 - 10 years it could come to be considered pretty well-studied), I would contend that a hybrid of Ring-LWE and SIKE should give people as much confidence as anything else in the near future.
Vadim, thanks for elaborating upon your security argument. Can you
please name a specific hybrid, including specific parameters,
* that you would recommend in place of mceliece6960119 and
* that you believe is supported by this security argument?
Again, I predict that the result of this will be followup analyses
showing that this hybrid increases costs _and_ risks.
Vadim Lyubashevsky writes:
> Sure, let's take SIKEp751 + Kyber1024
Thanks. To make sure I have this straight: Your starting hybrid question
was whether there is "a reason to standardize a KEM that is much less
efficient than a combination of more compact KEMs based on different
assumptions". To be specific, you're now claiming that
* sikep751+kyber1024 would be more efficient (rather than less
efficient) than mceliece6960119, and that
* sikep751+kyber1024 would reduce risks (rather than increase risks)
compared to mceliece6960119, for the reasons explained in your
previous message.
Did I get that right? If there's a followup paper analyzing the costs
and risks of sikep751+kyber1024 vs. mceliece6960119, you won't say that
this paper is addressing the wrong question?
---Dan
A web page can involve multiple public keys, but a key is often reused
for many pages. If the average number of keys retrieved per page is,
say, 0.1, and we add 1MB to upgrade _every_ key for _every_ page to
Classic McEliece, then the total increase in Internet traffic is around
0.8%. For comparison,
The number of _ciphertexts_ communicated by the ISP is a different
story. For each site, each user needs a new ciphertext communicated end
to end to set up a session key. Thousands of users of this ISP who talk
to Google during this 10-minute period can happily download the same
10-minute Google public key through an ISP cache, but each user has to
send Google a separate ciphertext under that key. The cryptographic data
communicated between Google and the ISP is then
* one key plus
* thousands of ciphertexts.
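To make the accounting concrete, here is a minimal sketch (illustrative only; the byte counts are approximate round-2 figures, and the cache-miss ratio is just an example of the kind of sharing described above) of the average cryptographic bytes per connection when public keys can be cached but ciphertexts cannot:

# Per-connection cryptographic bandwidth when public keys are cached.
# If a fraction u of connections has to fetch a fresh public key (cache
# miss) and every connection sends its own ciphertext, the average
# cryptographic bytes per connection are  u * |pk| + |ct|.
kems = {                 # (public key, ciphertext) in bytes, approximate
    "mceliece6960119": (1_047_319, 226),
    "kyber1024":       (1_568, 1_568),
}

def avg_bytes_per_connection(pk: int, ct: int, u: float) -> float:
    return u * pk + ct

for u in (1.0, 1 / 2000):    # no caching vs. thousands of users sharing a key
    print(f"cache-miss ratio u = {u}:")
    for name, (pk, ct) in kems.items():
        print(f"  {name:16} ~{avg_bytes_per_connection(pk, ct, u):,.0f} bytes")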
etc. Of course the standard "economy of scale" arguments say that the
last mile is also the most expensive mile to upgrade from copper to
fiber---but the transition is happening anyway, clearly driven by users
wanting more data than ISPs can squeeze out of the installed copper.

SpaceX low-orbit satellites
Let's assume that SpaceX manages to deploy all 11943 satellites by 2027,
that each satellite works at the rated 23Gbps, and that each satellite
is magically being used full-time. That's still not enough capacity to
handle _today's_ Internet traffic. Furthermore, I'm not aware of any
numbers suggesting that SpaceX will be competitive in cost with
traditional ISPs even if we ignore SpaceX's obvious inefficiencies in
caching. The argument that SpaceX actually makes in, e.g.,
https://cdn.arstechnica.net/wp-content/uploads/2016/11/spacex-Legal-Narrative.pdf
is that "many parts of the United States and the world lack access to
reliable broadband connectivity".
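For concreteness, here is the back-of-the-envelope arithmetic behind that capacity comparison (a sketch; the global-traffic figure is an assumed round number for illustration, labeled as such in the code, not a measurement from this thread):

# Back-of-the-envelope check of the satellite-capacity claim.
satellites = 11_943
gbps_each = 23
total_tbps = satellites * gbps_each / 1_000          # ~275 Tbps aggregate

seconds_per_month = 30 * 24 * 3600
eb_per_month = total_tbps * 1e12 * seconds_per_month / 8 / 1e18

# ASSUMPTION for illustration only: global IP traffic around 2019 on the
# order of 200 EB/month (the ballpark of public industry estimates).
assumed_global_eb_per_month = 200

print(f"aggregate capacity : ~{total_tbps:.0f} Tbps "
      f"(~{eb_per_month:.0f} EB/month if used full-time)")
print(f"assumed 2019 demand: ~{assumed_global_eb_per_month} EB/month")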
I'm not arguing that SpaceX should be excluded from the list of target
networks for future cryptography. I think it's useful to consider how
cryptography can squeeze into narrow corners of various networks. But
it's wrong to demand that _all_ NISTPQC options be designed this way.
On the contrary, I’m not sure how you’re getting u = 1/2000.
By "getting" do you mean "describing how this can happen" (which I've
done in considerable detail), or "claiming that this is the only
possibility" (which I've never done)?
I'm not arguing that NISTPQC should limit attention to a particular
deployment scenario. On the contrary, I'm _objecting to_ arguments that
NISTPQC should limit attention to a particular scenario.
In particular, there's an unjustified claim that McEliece uses more
bandwidth than (e.g.) NewHope---as part of an even more extreme claim
that the "inefficiency" of McEliece will stop it from being used. These
claims should never have been issued without explicit statements of the
underlying assumptions about u (and k).
How is loading from an ISP cache 2000x more cost-efficient or
energy-efficient than loading from the general internet?
You mean, how _can_ loading from an ISP cache be 2000x more efficient?
Answer: consider an ISP cache that's thousands of times closer than the
server.
How are you even hitting cache 99.95% of the time?
You mean, how _can_ the cache-hit ratio be 99.95%? Answer: consider
thousands of users of the ISP who want the same ephemeral key because
they're accessing the same site during the same forward-secrecy period.
Yes, you can use the same public key for multiple connections, but
usually it’s easier to use it once and then use the exchanged
symmetric key for multiple connections
Throughout this discussion I've been assuming normal use of symmetric
crypto by each user, including normal caching of symmetric keys to use
for many messages, which is what you're talking about here.
Looking beyond this single-user perspective shows that _there are many
users_. These users should not share symmetric keys. They _can_ and
_should_ share the work of retrieving public keys, producing the cost
reduction that I've been talking about for public keys---showing that
the "key size + ciphertext size" metric is oversimplified.As for downloading certs on wifi, are we still talking about 10-minute
keys?
I described this as a variable from the outset ("period of (say) 10
minutes"). I also quoted Langley's "once a day" goal---where, evidently,
once-per-day prefetching would cost you just 1MB of storage per site.
For comparison, the app that I upgraded a moment ago was 140MB.
the top 100 sites on the internet
Do you think there are 100 different sites that you frequently visit
from your smartphone? To me that sounds like a lot, but I haven't been
able to find measurements of typical smartphone behavior.
It's much easier to find numbers regarding the growth of device storage:
e.g., 16GB for the largest iPhone in 2007 vs. 512GB for the largest
iPhone today. Is it hard to imagine an iPhone with 16TB of storage in,
say, 2027, just a few years after NIST issues post-quantum standards?
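A quick arithmetic sketch of the storage side of this (using only the figures above; nothing here is a measurement):

# Storage cost of once-per-day prefetching of 1 MB keys for frequently
# visited sites, compared against the iPhone storage figures above.
sites = 100                  # the "top 100 sites" figure from the thread
prefetch_mb = sites * 1      # one ~1 MB Classic McEliece-sized key per site

for storage_gb in (16, 512): # largest iPhone in 2007 vs. today
    pct = 100 * prefetch_mb / (storage_gb * 1024)
    print(f"{prefetch_mb} MB of prefetched keys is {pct:.2f}% of {storage_gb} GB")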
That’s still going to be a meaningful cost
If we ask ten different pqc-forum participants for the dividing line
between "meaningful" and non-"meaningful" costs then we'll get ten
different answers. If you think you have a solid objective argument then
why do you resort to unclear, judgmental terminology?
Sure, OK, your phone could learn from your usage patterns to better
predict which certs to download
Aha! Now I see one of the underlying problems in this discussion.
You seem to be thinking of caching, "link-cost-sensitive prefetch", etc.
as hypothetical tools that we _could_ theoretically build to reduce the
costs of transporting these 1MB McEliece keys. You complain about the
effort to build these tools. You seem to think that this effort is a
"cost" of using McEliece, and that this justifies making claims about
network traffic under implicit assumptions that these tools won't exist.
I'm instead seeing caching, prefetching, etc. as general-purpose tools
that are already built into the Internet infrastructure today, at every
level from CDNs down to your phone (which, yes, already pays attention
to the link costs). There's no reason that we shouldn't use the same
tools to transport public keys. This undermines the claims about network
traffic---and enables better solutions for future cryptographic users.
I have a suggestion for NIST: Why not ask NIST's Advanced Network
Technologies Division for help in evaluating the network-performance
aspects of NISTPQC? Explaining the cryptographic sizes and data flow to
them is surely much less effort than explaining to cryptographers how
the Internet works, while there are other aspects of NISTPQC that we
really need cryptographers to be focusing on.
---Dan
- IETF has been strongly striving for ephemeral key per TLS connection and strongly opposed static/long-lived keys for any usecase.

Before this you seemed to be arguing that NISTPQC should limit attention to what's currently standardized and deployed, except for swapping out the crypto primitives. Here, however, you're arguing that NISTPQC should _ignore_ the installed base of working software (from, e.g., Microsoft) that does _not_ do what you're advocating, contrary to what a reader might expect from the word "strongly".
...
A couple additional Ops challenges with what you are proposing are - With TLS 1.3 and ESNI, an ISP cache would not know what public key to serve to a client as everything is encrypted. Serving a public key based on destination address will not work because CDNs serve multiple destinations from the same location.
... The TLS data flow makes this seem like an annoying conflict between functionality (hosting multiple sites on one server) and security (encrypting the site name). This is a fixable flaw in TLS.