1) The difficulty of parallelizing variants of Grover's algorithm and
2) The relative cost of classical vs quantum gates.
However, since most quoted figures in the literature for quantum "bits of security" don't take these things into account, we feel it was a mistake to use that language to describe what we were asking for.
Our current plan shares much with the previous approach. We still think it's reasonable to categorize submitted parameter sets into 5 rough security strength categories, where categories 1, 3, and 5 are at least as hard to break as AES128, AES192, and AES256, respectively, and categories 2 and 4 are at least as hard to break as SHA256 and SHA384, respectively. However, we don't necessarily think that quantum security can really be captured by a single number: the practical cost of an attack will be parametrized at least by the maximum circuit depth permitted by real-world quantum gate times and by the relative cost of classical and quantum gates. So instead, our approach would be to say that, for any reasonable assumptions regarding maximum circuit depth and relative quantum/classical cost, attacks against the schemes in a given security strength category should not be cheaper than attacks against the defining algorithm (e.g., something in security strength category 4 should be no easier to break, under any reasonable assumption, than SHA384). For reference, we'd consider a maximum depth ranging from 2^40 to 2^90 logical gates, and a relative quantum/classical gate cost ranging from 1 to 2^40, to be reasonable.
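To make the parametrization concrete, here is a small Python sketch of one plausible way to model the cost of a Grover-style key search under these two parameters; the cost model (parallel Grover under a depth limit, quantum gates weighted by a fixed classical-equivalent factor) and the function name are illustrative assumptions, not a prescribed metric.

# Illustrative sketch only: estimated attack cost against a 2^c key space,
# in log2 "classical gate equivalents", assuming the only quantum speedup
# is Grover's algorithm.
#
# Assumed cost model:
#   * A single Grover instance needs depth ~2^(c/2). With a depth limit of
#     2^d < 2^(c/2), the search is split across parallel instances, and the
#     total quantum gate count grows to roughly 2^(c - d).
#   * One quantum gate is charged as 2^r classical gates.
#   * A purely classical search of ~2^c gates is always available.

def attack_cost_log2(c, d, r):
    grover_gates = max(c / 2.0, c - d)       # total quantum gates (log2)
    quantum_cost = grover_gates + r          # converted to classical-gate units
    classical_cost = c                       # classical brute force
    return min(quantum_cost, classical_cost)

if __name__ == "__main__":
    # Sweep the "reasonable" ranges above: depth 2^40..2^90,
    # quantum/classical cost 1..2^40, for AES-sized key spaces.
    for c in (128, 192, 256):
        for d in (40, 64, 90):
            for r in (0, 20, 40):
                cost = attack_cost_log2(c, d, r)
                print(f"keys=2^{c} depth<=2^{d} cost ratio=2^{r}: ~2^{cost:.0f}")

Under this sort of model, the cheapest attack against AES128 ranges from about 2^64 quantum gates (unlimited depth, free quantum gates) up to a purely classical 2^128, which illustrates why a single "bits of quantum security" number is hard to pin down.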
We also wish to clarify that we do not, and did not intend to require that submitters provide different parameter sets for all 5 security levels. In our view, a parameter set with a higher security strength automatically satisfies the requirements for any of the lower security strengths. Our current suggestion is that submitters provide at least one parameter set meeting or exceeding security strength 4 or 5, and as many additional parameter sets as the submitter believes are necessary to take advantage of any security/performance tradeoffs offered by the design approach.
To demonstrate what NIST expects these security strength categories will mean for submitters trying to set their parameters, consider the following hypothetical scenario: Assume you are submitting a postquantum algorithm where there is only one tunable parameter, corresponding to classical security, and there are no quantum attacks, aside from generic ones (i.e. those based on Grover's algorithm and related stuff like amplitude amplification and quantum walks.)
Then, in order to meet security strengths 1,3,5, you simply need to set classical security to equal 128, 192, 256 bits respectively.
Meeting security strength 2 will require some amount of classical security between 128 and 192 bits, and meeting security strength 4 will require some amount of classical security between 192 and 256 bits. Whether the required amount of classical security is at the low or high end of this range will depend upon how well the classical attacks against your scheme Groverize (i.e., how effectively the generic quantum techniques can reduce the cost of the classical attacks). If they Groverize well, you will need a classical security strength at the high end of the range, and if they Groverize poorly, you will need a classical security strength at the low end of the range.
One possible change we may consider making to the current approach would be to eliminate the security strengths based on hash functions. This would simplify the security analysis somewhat, by effectively making generic quantum cryptanalysis irrelevant to our evaluation criteria. However, it would leave us with no way to give credit to algorithms if the classical attacks against them are hard to Groverize. A number of commenters suggested making a change in the opposite direction. Some even suggested going so far as to treat an algorithm with 128 bits of classical security and no quantum speedup as being equivalently strong to a 256-bit block cipher, since both have "128 bits of quantum security." We don't think this is reasonable. We can come up with plausible computation models where something with 192 bits of classical security and no quantum speedup might be as hard to break as AES256 (and we can come up with plausible models where nothing with less than 256 bits of classical security is as hard to break as AES256), but we can't come up with a reasonable justification for treating something with much less than 192 bits of classical security as being as strong as AES 256.
Questions to all interested parties:
Does the current approach seem sound? What (if any) changes would you suggest?
Questions to potential submitters:
Do you feel you will be able to do the analysis we are requesting? Do you need more guidance?
Thanks,
Ray Perlner NIST
What makes this challenging is that we also want to fit within the
performance constraints of applications. Sometimes these constraints,
especially bandwidth constraints, are quite severe. Most of NIST's
submission requirements are clearly supporting the goal of providing
post-quantum security within performance constraints.
However, a few of the requirements in the first draft are distractions
from this goal. It will be highly unfortunate if the best submissions
are eliminated for lack of compliance with those requirements. It will
also be unfortunate if trying to compensate for this possibility takes
time away from designing and analyzing the best submissions.
Perlner, Ray (Fed) writes:
[ if there's nothing better than Grover ]
> Then, in order to meet security strengths 1,3,5, you simply need to set
> classical security to equal 128, 192, 256 bits respectively.
This is not simple. The difficulty of quantifying security levels for
public-key systems was the top issue in my comments on the first draft:
https://blog.cr.yp.to/20161030-pqnist.html
I chose _pre-quantum_ DSA as an illustrative example to make clear that
this difficulty is much broader than quantifying quantum speedups.
Furthermore, it is not useful to require each submission to have
parameters targeting >=2^256 pre-quantum security. Weakening this to
>=2^192 pre-quantum security would still be a distraction. The reality
is that
* we're still decades away from anyone doing 2^128 computations and
* anyone who really wants >256-bit ECC can already find it.
It's not even clear that users should reject a system that's breakable
in, say, 2^80 quantum operations---this depends very much on how
expensive and how parallelizable the quantum operations are---although
if quantum operations turn out to be cheap then higher security levels
will be important.
Suppose a system S targets >=2^96 post-quantum security by targeting
>=2^192 pre-quantum security and assuming a square-root quantum speedup.
This sounds like a perfect match for NIST's "security strength 3", the
same security level as AES-192. But now suppose the actual security
levels established by years of security analysis turn out to be
* S: 2^184 pre-quantum security and
* S: 2^128 post-quantum security.
The system will never be broken unless there are massive attack
speedups. Will NIST throw this system away because it's "breakable" in
"only" 2^184 operations? Why should the user care about this?
Meanwhile suppose another system T, slower than S, is submitted the same
way and turns out to have exactly NIST's envisioned
* T: 2^192 pre-quantum security and
* T: 2^96 post-quantum security.
System T is much riskier than system S, and yet it is preferred under
NIST's current rules. I don't understand the motivation for this.
Suppose the rules are changed to focus on what we actually want, namely
post-quantum security. Suppose another system U was originally thought
to have 2^96 post-quantum security but turns out to have
* U: 2^90 post-quantum security.
Will NIST throw system U away because it is below its originally
targeted security level? Why should the user care about the original
guess?
Meanwhile suppose another system V was submitted as having >=2^64
post-quantum security and turns out to have
* V: 2^80 post-quantum security
while being slower than U. This is clearly worse than U, and yet it is
preferred under NIST's current rules. I don't understand the motivation
for this either.
I would simply allow the post-quantum security and performance analysis
to proceed for S, T, U, and V, and use the resulting numbers to see that
S beats T and that U beats V. I don't see why pre-quantum security is
relevant here; it can certainly be useful as a tool for guessing
post-quantum security, but these initial guesses are a distraction once
we have more serious analyses.
I think it's reasonable for NIST to be announcing minimum and maximum
target post-quantum security levels in advance. Concretely, it's helpful
for submitters to know that
* the risk of 2^64 quantum operations is viewed as so significant
that anything with post-quantum security <2^64 will be thrown away,
while
* the risk of 2^128 quantum operations is negligible,
so that submitters don't have to waste time working out parameters that
are clearly too small or that are clearly overkill. But I don't see the
rationale for telling submitters that a strong system with 2^90
post-quantum security will be thrown away merely because its initial
guess was 2^96, or that a very strong system with 2^128 post-quantum
security will be thrown away merely because there's some 2^184
pre-quantum attack.
> However, it would leave us with no way to give credit to algorithms,
> if the classical attacks against them are hard to Groverize.
Focusing on what we care about, namely post-quantum security and
performance, will automatically reflect this extra strength as a higher
post-quantum security level for a given performance level.
> we can't come up with a reasonable justification for treating
> something with much less than 192 bits of classical security as being
> as strong as AES 256.
Asking for something to be "as strong as AES-256", as measured by
_pre-quantum_ security levels, is a distraction from standardizing
_post-quantum_ cryptography. As I said before, anyone who really wants
>256-bit ECC can already find it.
---Dan
When I see a claim that some lattice-based scheme (I am just choosing
lattices because that's my area, but I think this probably extends to most
other public key schemes as well) has 219 bits of security, I am always
reminded of a story my high school chemistry teacher told us when
explaining the concept of significant digits:
Two men are walking past two fields of grazing cows, and one asks the other
as to how many total cows he thinks are in those two fields. The second
man immediately replies "1003". The first man is very surprised and asks
"How were you able to come up with such a precise estimate so quickly?" To
which the other replies, "well, there are 3 cows on one field, and about
1000 on the other."
The bottom line is that there are so many approximations being done and so
many assumptions made and so many lower-order terms being removed (usually
all is done in the attacker's favor, so the parameters that one gets are a
lower-bound) that it is a little strange to come up with such
precise-looking numbers. Admittedly, these numbers could be somewhat
useful when comparing lattice schemes among themselves, but do not say much
about the actual security level.
Even if we could all agree on a common way to evaluate lattice primitives
so that they are definitely more than k bits secure, how would we compare
them to other proposals? We would have to ensure that everyone is being
cautious in the same way, which is impossible. To require the submitters to
make a concrete evaluation of their proposals is unlikely to result in much
that is useful, and in fact may do more harm because (as Dan also pointed
out), those numbers may stick and get more clout than they deserve.
My proposed solution would be that the submitters should submit several
(say 4 or 5) parameter sets for their schemes. They can make a guess as to
how much post-quantum security each set (or maybe just one of the sets)
provides, but this will just be a guess and a rough starting point for the
cryptanalytic effort. If the 7 years of cryptanalysis during this
competition makes NIST/crypto community uncomfortable with this initial
guess, then the next "more secure" parameter set from these submitters can
be considered. Similarly, if we think that their guess was an overestimate,
a lower parameter set can then be considered. In addition, the benchmark
should be post-quantum, rather than classical, security.
Furthermore, since we are dealing with public key primitives, it should be
very desirable that there exists some simple procedure for creating more
secure instances (basically, I am saying that the primitives should be
families rather than just individual functions). So I think that NIST
should not simply consider the submitted schemes and their parameters, but
should also be placing a heavy emphasis on judging the *design* of the
family of functions, and consider how easy/hard it would be to instantiate
this function with more security.
So the goal of the competition should be to get a scheme/parameters with
128 bits of post-quantum security. Furthermore, there should be an easy
procedure to create more secure instantiations of the public key components
(i.e. we do not need to worry about what happens if SHA gets broken) of
the same scheme. I think that this should suffice for the requirements.
Thank you,
-Vadim
1) We intend to consider the security strength categorizations made by the submitters as preliminary estimates. We're not going to kick out a scheme just because they set their parameters wrong. Depending on how far off the estimate was, and how unanticipated the attack, we may take it as a sign the algorithm isn't mature enough. But, assessments of an algorithm's maturity will not be primarily based on security strength categories. Rather, the point of the categories is to compare like with like when doing performance comparisons and to make it easier to plan crypto transitions in the future. We will respond to attacks that contradict the claimed security strength category, but do not bring the maturity of the scheme into question, by bumping the parameter set down to a lower category, and potentially encouraging the submitter to provide a higher security parameter set.
2) We believe that, for most ways of measuring quantum security, it is not entirely clear which will be harder to break in practice:
a. a scheme with 184 bits of classical security and 128 bits of quantum security, or
b. a scheme with 192 bits of classical security and 96 bits of quantum security.
We think that, given the uncertain performance characteristics of future quantum computing technology, such edge cases are inevitable.
3) We do not see augmenting a postquantum primitive by using it in a hybrid mode with ECC as an acceptable substitute for increasing the classical security of the postquantum primitive itself. As in our draft call for proposals, we do not plan to ask submitters to provide these sorts of hybrid schemes. Fundamentally, we see hybrid schemes and non-hybrid schemes with additional classical security as providing a hedge against different possibilities. A postquantum scheme paired with ECC hedges against cryptanalysis breakthroughs that might occur before the advent of practical quantum computers. However, even a relatively slow and expensive quantum computer is enough to reduce an attack on the hybrid scheme to an attack on the postquantum part, which will be easy if it doesn't have enough classical security. This is why the heuristic, that 128, 192, and 256 bits of classical security are enough for security strengths 1, 3, and 5, respectively, only holds in the case where there are no relevant quantum attacks, aside from variants of Grover's algorithm.
Thanks again,
Ray Perlner
From: pqc-forum-bounces at nist.gov [mailto:pqc-forum-bounces at nist.gov] On Behalf Of Vadim Lyubashevsky
Sent: Monday, November 07, 2016 4:56 AM
To: D. J. Bernstein <djb at cr.yp.to>
Cc: pqc-forum <pqc-...@list.nist.gov>
Subject: Re: [Pqc-forum] NIST request for feedback on measurement of security strengths in our upcoming pqc call for proposals.
Dan brings up some very good points regarding the security parameter requirements (both in this e-mail and in the blog entry referenced) that I am hoping will be seriously considered by NIST. In particular, I am also concerned about the requirements for the proposals to have parameters for concrete security levels. I think Dan is completely right in saying that the current way of doing it forces one to be either overly cautious or overly confident. What would be much more beneficial for the community is to have some sense of the precision of the claimed security level.
* First build a quantum computer capable of carrying out 2 trillion
_quantum_ operations per second for only 1 watt of power. (To put
this in perspective, a 75-watt NVIDIA GeForce GTX 1050 Ti performs
2 trillion single-precision floating-point operations per second.)
* Then build 2^57 copies of this quantum computer. (To put this in
perspective, about 2^34 ARM chips were sold last year.)
* Then surround the Earth by perfect solar cells collecting all 2^57
watts that the Earth's atmosphere receives from the Sun, and feed
this power to the quantum computers. (To put this in perspective,
the entire human race is currently using 2^44 watts.)
* Then run those quantum computers for the next 30 years.
That's 2^128 quantum operations, at an energy cost of 2^62 watt-years
(where 2^30 watt-years cost roughly a billion dollars at current energy
prices), under quite optimistic assumptions about quantum-computer speed.
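A short Python sketch of the same back-of-the-envelope arithmetic, using exactly the figures assumed above (2 trillion quantum operations per second per watt, 2^57 one-watt machines, 30 years):

from math import log2

# Figures as assumed in the scenario above.
OPS_PER_SEC_PER_WATT = 2e12        # quantum operations per second per watt
MACHINES = 2**57                   # one-watt quantum computers
SECONDS_PER_YEAR = 365.25 * 24 * 3600
YEARS = 30

total_ops = OPS_PER_SEC_PER_WATT * MACHINES * SECONDS_PER_YEAR * YEARS
watt_years = MACHINES * YEARS      # one watt per machine

print(f"total quantum operations: ~2^{log2(total_ops):.0f}")  # ~2^128
print(f"energy: ~2^{log2(watt_years):.0f} watt-years")         # ~2^62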
Sure, this might happen in the far future. But this is certainly not the
most urgent problem in post-quantum cryptography! Let's figure out first
how we can get 2^128 post-quantum security actually deployed, and then
figure out whether anything more will ever be needed.
There's a strong argument for looking at lower security levels too.
Suppose that many years of further development of quantum engineering
conclude that a quantum operation consumes 2^40 times as much energy as
a pre-quantum operation. Then 2^88 quantum operations won't happen in
the foreseeable future, and it will look rather silly for NIST to have
required every submission to target 2^128 post-quantum security.
Post-quantum proposals vary considerably in scalability. Some proposals
are clearly competitive at 2^88 but non-competitive at 2^128. Why should
those proposals be forced to include parameters targeting 2^128? This is
a distraction from what those proposers should actually be focusing on.
Of course, if it turns out that quantum operations are actually much
less expensive, say 2^20 times pre-quantum, then a proposal breakable in
2^88 quantum operations is like a proposal breakable in 2^108 pre-quantum
operations, which I wouldn't recommend. But in this case there's still
no problem to have a proposal breakable in 2^108 quantum operations.
Here are five examples of reasonable situations to consider:
* quantum op costs 2^10 pre-q ops => 2^118 quantum ops aren't a threat
* quantum op costs 2^20 pre-q ops => 2^108 quantum ops aren't a threat
* quantum op costs 2^30 pre-q ops => 2^98 quantum ops aren't a threat
* quantum op costs 2^40 pre-q ops => 2^88 quantum ops aren't a threat
* quantum op costs 2^50 pre-q ops => 2^78 quantum ops aren't a threat
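All five lines apply the same rule, namely subtracting the cost exponent from a fixed 2^128 pre-quantum-equivalent budget; in code:

# The rule behind the five examples above: if one quantum operation costs as
# much as 2^r pre-quantum operations, a 2^128-equivalent budget only buys
# 2^(128 - r) quantum operations.
for r in (10, 20, 30, 40, 50):
    print(f"quantum op costs 2^{r} pre-q ops => 2^{128 - r} quantum ops aren't a threat")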
At this very early stage of quantum computing, we can't reasonably guess
which situation we're in. A proposal that targets 2^90 or 2^100 has a
good chance of being useful, and shouldn't be artificially rejected. Of
course people who go farther and farther below 2^128 are relying on
quantum operations being more and more expensive, but this reliance
could turn out to be justified, and this could end up making a huge
difference in deployability.
I'm not saying that we want to consider arbitrarily low post-quantum
security levels. For example, if we allowed 2^20 post-quantum security
then some of today's standard public-key cryptosystems would qualify!
Saying that those are post-quantum systems is pretty much the same as
quantum denial.
My initial comments suggested setting the lower limit at 2^64. Setting
this [2^64,2^128] range of target post-quantum security levels is enough
to move forward with submissions and evaluations. Proposals established
to be below 2^64 will be thrown away; everything else will be compared
on the same two-dimensional (security,time) graph.
Perlner, Ray (Fed) writes:
> We believe that, for most ways of measuring quantum security, it is not
> entirely clear which will be harder to break in practice
> a. a scheme with 184 bits classical security and 128 bits quantum
> security or
> b. a scheme with 192 bits classical security and 96 bits quantum
> security,
2^184 pre-quantum operations are not going to happen in the foreseeable
future, so the advantage of the second scheme is meaningless in the real
world. See above.
The advantage of the first scheme _could_ matter for users in the real
world in the foreseeable future. If, for example, quantum operations
turn out to be quite cheap, only 2^10 times pre-quantum operations, then
2^96 quantum operations are like 2^106 pre-quantum operations, and this
is "only" a trillion-dollar investment.
Notice that these evaluations of which security levels are meaningful,
or at least potentially meaningful, rely entirely on asking what attacks
might actually be carried out. This is completely different from asking
which security levels are achieved by symmetric crypto.
> We do not see augmenting a postquantum primitive by using it in a
> hybrid mode with ECC as an acceptable substitute for increasing the
> classical security of the postquantum primitive itself.
What is the actual real-world value of aiming for an obviously overkill
2^256 pre-quantum security level?
Does NIST tell RSA users that they're supposed to be using RSA-15000,
and that the presence of RSA-15000 is an important part of NIST's RSA
standards? Seriously? (Even worse, the 15000 would almost certainly need
to be much higher to protect against multi-user attacks, and many of my
earlier comments on DSA uncertainties are also applicable to RSA.)
The failure mode you're referring to here is as follows. Someone aims
for a very low post-quantum security level, say 2^64 (betting that
quantum operations turn out to be expensive); the pre-quantum security
level turns out to be rather close, say 2^80 (so a 2^64 quantum attack
isn't even relevant unless quantum operations are quite cheap); and an
ECC hybrid is also breakable by Shor.
Is this security level adequate? I would say no, and I wouldn't object
to a rule saying that a pre-quantum attack far below 2^128 is considered
a break. But worrying about this low end does _not_ justify requiring
every submitter to aim for meaningless security levels far above 2^128.
Post-quantum proposals vary considerably in the details of how the
pre-quantum security level compares to the post-quantum security level.
It won't be surprising to have tradeoffs such as the following:
* System X: security (2^(1.8b),2^b) for (pre,post)-quantum.
* System Y: security (2^(1.5b),2^b) with better performance.
For example:
* Taking b=128 gives (2^230,2^128) for X and (2^192,2^128) for Y.
* Taking b=112 gives (2^202,2^112) for X and (2^168,2^112) for Y.
* Taking b=96 gives (2^173,2^96) for X and (2^144,2^96) for Y.
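These numbers, and the effect of instead demanding >=2^256 pre-quantum security (discussed next), follow directly from the two exponents; a quick sketch:

# Hypothetical systems from the example above:
#   X: (pre, post)-quantum security (2^(1.8b), 2^b)
#   Y: (pre, post)-quantum security (2^(1.5b), 2^b), with better performance.
def levels(ratio, b):
    return ratio * b, b        # (pre-quantum exponent, post-quantum exponent)

for b in (128, 112, 96):
    (pre_x, post_x), (pre_y, post_y) = levels(1.8, b), levels(1.5, b)
    print(f"b={b}: X=(2^{pre_x:.0f},2^{post_x}), Y=(2^{pre_y:.0f},2^{post_y})")

# Forcing >=2^256 pre-quantum security instead dictates the choice of b:
print(f"X must take b around {256/1.8:.1f}, Y around {256/1.5:.1f}")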
Very likely Y is better for the users. The only hope for X to be
competitive at a reasonable security level is for quantum operations to
be quite expensive, considerably more than 2^40 pre-quantum operations.
In the same example, requiring a meaningless (>=2^256,>=2^128) means
that X has to take b around 142 and Y has to take b around 170. This
means disregarding the post-quantum security of both X and Y, quite
likely spoiling the performance advantage of Y, and in the end giving
users a system that's very likely worse. This is a severe mistake.
> We're not going to kick out a scheme just because they set their parameters wrong.
I'm happy to hear this.
Now suppose an accurate security evaluation says that a post-quantum
proposal targeting 2^96 security actually has 2^92 security, or 2^100
security. What exactly is the benefit of telling submitters that they
are supposed to come up with new parameters?
A careful study of the factorization literature shows that RSA-2048 has
suffered dozens of security losses. Has NIST been responding to each of
those security losses by bumping the 2048 up? Of course not.
Most of the state-of-the-art speeds for cryptography, and many of the
state-of-the-art speeds for attacks, include optimizations for specific
sizes. Trying to generalize to all sizes is often much harder. Changing
the size means losing these optimizations. What's the benefit?
In trying to copy typical security levels from symmetric crypto to
public-key crypto, NIST is ignoring a _vast_ gulf in the maturity of the
security analyses. This copying is also what leads to meaningless and
distracting requests for >=2^256 pre-quantum security.
> Rather, the point of the categories is to compare like with like when
> doing performance comparisons
Lumping a submission with 2^92 security together with a submission with
2^80 security in a 2^80 "category" is not comparing like with like. The
right thing to do is plot (security,time) pairs on a two-dimensional
graph.
> and to make it easier to plan crypto transitions in the future.
I don't understand this argument. If people plan to transition from,
say, RSA-2048 to 256-bit ECC, how does the change in security levels
make the plan more difficult?
---Dan
Again, the point of the security strength categories is not to "artificially reject" algorithms or parameters. The point of the categories is to impose some order on the range of security strengths we're interested in, since existing ways of measuring quantum security in the literature are highly variable, and often inconsistent with one another and with what we'd expect to be the real world cost of quantum and classical attacks. Additionally, we'd like submitters to be able to provide overkill parameters without fear that they need to compete on performance with other submitters who've set their parameters less conservatively. Conversely we'd like submitters to be able to submit parameters that are more optimized for performance, as long as we can have a reasonable expectation that those parameters will still be secure for a significant period of time. Without the categories we would worry that submitters would be left guessing whether they should be targeting the low or high end of the range when selecting their parameters.
We agree that most of our security strength categories are probably overkill. In particular, we'd expect security strengths 2 on up to be secure for 50 years or more, and we wouldn't be terribly surprised if security strength 1 lasted that long as well. That said, we've seen significant demand for higher security strengths, and such a demand may be reasonable. For example, someone may require their data to remain secret for 30+ years, may not be able to implement a standard coming out of our process for 15 years, and may not want to replace their crypto for another 15 years. It is because this demand exists that we suggest (not require) that submitters offer something that will meet this demand.
As for the lower security strength categories, we absolutely are willing to standardize parameters in those categories, as long as we think those categories remain secure. If we didn't think, for example, that security strength 1 was secure, we would signal this by withdrawing AES 128. We have not done this, nor do we have any current plan to do so. Until that changes, submitters of an algorithm selected for standardization should expect us to standardize parameters in all security strength categories where parameters are provided.
Thanks,
Ray Perlner
-----Original Message-----
From: pqc-forum-bounces at nist.gov [mailto:pqc-forum-bounces at nist.gov] On Behalf Of D. J. Bernstein
Sent: Sunday, November 20, 2016 12:14 AM
To: pqc-forum <pqc-...@list.nist.gov>
Subject: Re: [Pqc-forum] NIST request for feedback on measurement of security strengths in our upcoming pqc call for proposals.
Thanks for your comments! I am glad we are having this discussion. I agree that there are a lot of potential issues in how to measure post-quantum security, and I think it's best if we as a community come to grips with these issues as early as possible.
I wanted to add a bit to what Ray has written:
1. First, it is true that our "security strength categories" will not be a perfect fit for some PQC schemes. However, this need not be an obstacle to making sensible decisions about what to standardize. These categories are a starting point for our analysis, not the final result. We are trying to take an approach that is agnostic with respect to the detailed properties of the various PQC schemes. But as we proceed, we will obviously look at those detailed properties.
2. In your example, you are correct that "system Y" does not fit very well in category 5 (256 bits of classical security, 128 bits of quantum security). However, there is an easy way to avoid this awkward situation, because "system Y" does fit nicely in category 4 (192 bits of classical security, 128 bits of quantum security). So we would recommend that "system Y" be submitted in category 4, and there would be no need to submit it in category 5. Our tentative plan is to consider both categories 4 and 5 for high security applications.
(This is consistent with Ray's email from 10/28/16: "We also wish to clarify that we do not, and did not intend to require that submitters provide different parameter sets for all 5 security levels.... Our current suggestion is that submitters provide at least one parameter set meeting or exceeding security strength 4 or 5.")
3. In general, we have tried to provide enough different security categories so that for every PQC scheme, there will be a few categories where it can fit reasonably well. We'd definitely like to get more feedback about whether this works well for people, and whether there are improvements that we could make.
For instance, here are a few questions that have come up in our internal discussions:
- Should we create a new security category that is based on a 160-bit block cipher (which would sit between categories 1 and 3, which are based on AES-128 and AES-192)? This would allow a more gradual ramp-up of the security strength, instead of having a big jump from 128 to 192.
- Should we create a new security category that is based on a 320-bit hash function (which would sit between categories 2 and 4, which are based on SHA3-256 and SHA3-384)? Again, this would allow a more gradual ramp-up of the security strength.
- Should we get rid of category 5, which is based on AES-256? Requiring such a high level of security might be over-emphasizing paranoia at the expense of efficiency.
4. One might be a bit wary of analyses of "what happens if quantum operations are {2^10, 2^20, 2^30, 2^40} times more expensive than classical operations," because there's always a possibility that some cryptanalyst will find a way to amortize the cost of quantum operations, by combining a small-and-expensive quantum attack with a large amount of cheap classical computation.
For instance, what if someone comes up with a quantum attack that uses 2^80 classical operations and 2^40 quantum operations? And what if the classical and quantum parts of the attack are separate, so that the classical operations can be done on a classical computer? Then this quantum attack will be cheap -- it will only cost as much as a 2^80 classical attack.
(Note: One might argue that this is a contrived example, because there might be some way of converting this hybrid quantum-classical attack into a completely-classical attack. Certainly this would be true if those 2^40 quantum operations were being used to run Grover's algorithm -- then one could simply replace them with 2^80 classical operations. But this might not be the case. If those 2^40 quantum operations were being used to run Kuperberg's algorithm for dihedral HSP, which achieves a superpolynomial quantum speedup, then the best classical attack could still be well over 2^128 operations.)
For these reasons, it seems difficult to say exactly how many "quantum operations" we can expect an adversary to perform.
5. Just to repeat what I said earlier: we really appreciate getting your feedback on these issues, it makes us aware of possible pitfalls in our approach to PQC standards. Thanks very much!
Cheers,
--Yi-Kai
Suppose submission H is an efficient submission for stateless hash-based
signatures tuned for an ample 2^128 post-quantum security level, on the
basis of extensive security analysis.
Should submission H be punished because some painfully slow submission
targets "overkill" security levels to compensate for a lack of security
analysis, and then argues that H is "less conservative" since it targets
"merely" 2^128 post-quantum security?
Should the hash submitters have to defend themselves against this by
wasting time working out details of larger parameters---for example,
specifying and analyzing a fast short-input >256-bit hash function, and
maybe >512-bit and >1024-bit since "overkill" is inherently unlimited?
Part of the problem here is that people will be forced to waste time
specifying and analyzing insanely high security levels. Another part of
the problem is that everyone will end up being flooded with performance
comparisons at those security levels---which are poorly correlated with
performance comparisons at reasonable security levels.
> someone may require their data to remain secret for 30+ years, may not
> be able to implement a standard coming out of our process for 15
> years, and may not want to replace their crypto for another 15 years.
Yes, we _might_ need more than 2^128 post-quantum security to survive an
attack 60 years from now, depending on how quantum-computer technology
evolves. (I'm talking about actual security levels here, not estimated
security levels; there's a separate question of how much security margin
designers want to leave as protection against improvements in attacks.)
None of NIST's security categories tackle this >2^128 far-future
scenario. The top category allows submissions that are breakable in
2^128 quantum operations, just like AES-256.
This is fine with me. I think that trying to target post-quantum
security levels above 2^128 at this point will be a severe distraction
from much more urgent things. I don't think that NIST should reconsider
its choice of 2^128 as the maximum targeted post-quantum security level.
What's really bothering me is NIST's accompanying requirement of 2^256
pre-quantum security. This is insanely far from the current reality:
* The human race consumes 2^44 watts.
* State-of-the-art pre-quantum computer technology does 2^60
operations per watt-year.
Sure, 2^60 is an improvement compared to 2^35 operations per watt-year
from the Cray-1, but the Cray-1 was 40 years ago; that's 1.6 years per
doubling. The more recent trends are more like 2 years per doubling.
Industry observers consistently predict further slowdowns.
To imagine the human race carrying out an attack that takes, say, 2^192
pre-quantum operations, you have to imagine something like
* 2^57 watts being devoted to cryptanalysis for 30 years,
* performing an incredible 2^130 operations per watt-year.
Leaping from 2^60 pre-quantum operations per watt-year to 2^130
pre-quantum operations per watt-year in the next 60 years (never mind
the fact that the attack is then _starting_ at year 60) would have to
mean _less than a year_ for an average doubling---an astounding
acceleration in the progress of pre-quantum computer technology. What's
even more astounding, in the context of a post-quantum effort, is the
notion that the human race
* would acquire this ability to do 2^192 pre-quantum operations but
* _wouldn't_ at the same time acquire the ability to do >2^128
quantum operations.
That's what would be necessary to justify requiring >2^192 pre-quantum
security while tolerating merely 2^128 post-quantum security.
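As a quick check of this extrapolation (same inputs as above: 2^60 operations per watt-year today, 2^35 for the Cray-1 forty years ago, and 2^57 watts for 30 years against a 2^192 target):

from math import log2

# Historical rate: 2^35 -> 2^60 operations per watt-year over ~40 years.
doubling_years = 40 / (60 - 35)
print(f"historical doubling time: {doubling_years:.1f} years")      # 1.6

# Budget for the hypothetical attack: 2^57 watts for 30 years.
budget_wy = 57 + log2(30)                    # ~2^62 watt-years
needed_ops_per_wy = 192 - budget_wy          # ~2^130 ops per watt-year
print(f"required efficiency: ~2^{needed_ops_per_wy:.0f} ops/watt-year")

# Doublings needed in the next 60 years to go from 2^60 to that level.
doublings = needed_ops_per_wy - 60
print(f"{doublings:.0f} doublings in 60 years => {60/doublings:.2f} years per doubling")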
Is this combination of events imaginable? Maybe. Is it going to happen?
Extremely unlikely. Is it an important enough possibility to justify
taking attention away from core post-quantum targets? Certainly not.
Requiring 2^256 pre-quantum security is going to end up making worse
algorithms look better, in part because the pre-quantum-post-quantum gap
varies (see the "System X" vs. "System Y" example in my previous email),
and in part because scalability varies (as in the 2^88 vs. 2^128 example
in my previous email).
The same serious problem applies to category #5 (2^128 post, 2^256 pre),
category #4 (2^128 post, 2^192 pre), and category #3 (2^96 post, 2^192
pre). Category #1 and #2 are different (as long as they aren't taken as
sharp edges): as I said before, I wouldn't object to a rule saying that
a pre-quantum attack far below 2^128 is considered a break.
> It is because this demand exists that we suggest (not require) that
> submitters offer something that will meet this demand.
Even _hinting_ that something will be part of the evaluation will end up
forcing submitters to obey---and what NIST is doing is clearly much more
than a hint. We'll then have all the problems of people being distracted
by irrelevant targets and of better submissions being made to look worse
on the basis of those targets.
> Without the categories we would worry that submitters would be left
> guessing whether they should be targeting the low or high end of the
> range when selecting their parameters.
I don't see how copying a spread of symmetric security levels makes the
goals clear for submitters. Submitters will end up being forced to take
a shotgun approach, trying to hit every named target and a huge cloud of
additional targets (don't want to be beaten in the "overkill" game!),
rather than focusing on what's most important.
---Dan
Should the security analysis consider alternate engines such as biomath?
- I think the right way forward is to clearly separate quantum security
from classical security. Just as classical security levels are built on
the cost of classical computers, the clock rate of these computers, and
the parallelization of classical algorithms, quantum security levels
should be based on the cost of quantum computers, the clock rate of
these computers, and the parallelization of quantum algorithms. If
possible, the security levels should be chosen so that the cost for a
quantum attacker is equal to the cost for a classical attacker.
- Very much agreeing with what Dan is saying. I think a [2^64, 2^128]
range for quantum security levels seems very reasonable. We should not be
requiring 128-bit quantum security just because 128-bit security is the
current classical security level. Asymmetric PQC public keys and
signatures will likely be heavy for many IoT radio technologies even with
64-bit quantum security.
- In addition to asymmetric algorithms, I think that NIST should as early
as possible provide some more clarity regarding the status of AES-128 and
SHA-256. The last two weeks there have been discussions regarding this in
both IETF and 3GPP. If NIST believes AES-128 is secure for the foreseeable
future, it would be very good if you stated that. If you do not believe
AES-128 is secure for the foreseeable future, it would be very good if you
stated that as well.
NIST SP 800-57 states that AES-128 and SHA-256 can be used beyond
2030 (without specifying an end date). I have currently not seen any
strong reasons to change that because of quantum computers.
The current NIST estimate is "it is likely that a quantum computer capable
of breaking RSA-2048 in a matter of hours could be built by 2030 for a
budget of about a billion dollars". The current estimate
(https://arxiv.org/pdf/1512.04965v1.pdf) is that attacking AES-128
requires 2^86.3 quantum operations.
- As breaking RSA-2048 requires 72 * 2048^3 = 2^39.2 quantum operations, a
single such quantum computer would require a matter of 2^86.3 / 2^39.2 /
24 / 365 = 17 billion years to break AES-128.
- Such a quantum computer could do 2^39.2 * 24 * 365 * 10 = 2^55.6
operations in a decade. And as a cluster of N quantum computers
gives a speedup of approximately N^(1/2), it would require around (2^86.3 /
2^55.6)^2 = 3 * 10^18 such quantum computers to break AES-128 in a matter
of decades. The cost of such a cluster in 2030 would be around 3 * 10^27
dollars, or 3 octillion dollars.
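For reference, this arithmetic can be reproduced as follows; the only added assumption is that "a matter of hours" for RSA-2048 is read as roughly one hour, i.e. about 2^39.2 quantum operations per hour:

from math import log2

rsa_ops = 72 * 2048**3            # ~2^39.2 quantum operations for RSA-2048
aes_ops = 2**86.3                 # quantum operations to break AES-128
print(f"RSA-2048: ~2^{log2(rsa_ops):.1f} quantum operations")

hours_per_year = 24 * 365
years_single = aes_ops / rsa_ops / hours_per_year
print(f"single machine: ~{years_single:.1e} years")            # ~1.7e10 (17 billion)

ops_per_decade = rsa_ops * hours_per_year * 10                 # ~2^55.6
print(f"per machine per decade: ~2^{log2(ops_per_decade):.1f}")

# A cluster of N machines gives ~sqrt(N) speedup, so N ~ (work ratio)^2.
machines = (aes_ops / ops_per_decade)**2
print(f"machines for a matter of decades: ~{machines:.1e}")    # ~3e18
print(f"cost at a billion dollars each: ~{machines * 1e9:.1e} dollars")  # ~3e27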
/John
Cheers,
Brent
-----Original Message-----
From: John Mattsson [mailto:john.mattsson at ericsson.com]
Sent: Wednesday, November 23, 2016 3:37 PM
To: Brent Kimberley <Brent.Kimberley at Durham.ca>; 'D. J. Bernstein' <djb at cr.yp.to>; pqc-...@list.nist.gov
Subject: Re: [Pqc-forum] NIST request for feedback on measurement of security strengths in our upcoming pqc call for proposals.
The performance consequences of these security targets are likely to be
even larger than the performance consequences of NIST's ill-considered
request for 2^512 preimage security for SHA-3. The resulting _security_
risks are vastly larger: applications that need good performance will
see years of unnecessary delay of deployment of post-quantum crypto.
Liu, Yi-Kai (Fed) writes:
> Requiring such a high level of security might be over-emphasizing
> paranoia at the expense of efficiency.
It's over-emphasizing a very strange type of paranoia in which
* the attackers can't do 2^128 quantum operations; but
* quantum computers do work and the attackers _can_ do quantum
operations very quickly, so a significantly lower security level
will be broken; while
* pre-quantum operations are something like 18446744073709551616
times more efficient than quantum operations, justifying 2^192
pre-quantum security.
Meanwhile it's ignoring a much more reasonable fear that an inefficient
standard will not be deployed. Merely adding lower-security parameters
to the standard doesn't adequately address this: there's no reason to
believe that a standard selected for an absurdly high pre-quantum
security goal will provide the best performance for a reasonable
post-quantum security goal.
> it is true that our "security strength categories" will not be a
> perfect fit for some PQC schemes.
The pre-quantum requirements in categories #3, #4, and #5 will severely
corrupt the evaluation of hash-based crypto, code-based crypto, and
lattice-based crypto, in the same way that they would have corrupted the
evaluation of RSA, DSA, etc. This is not a corner issue.
The question isn't whether one can imagine examples that _won't_ be
corrupted (e.g., ordinary-isogeny-based crypto is probably undamaged).
The question is whether we're going to stay focused on the job at hand,
namely minimizing the damage likely to be caused by quantum computers.
> We are trying to take an approach that is agnostic with respect to the
> detailed properties of the various PQC schemes.
Really? The categories are all quite explicitly modeled after particular
post-quantum schemes---various symmetric schemes. This has the effect of
punishing post-quantum public-key schemes that have smaller gaps between
pre-quantum and post-quantum security levels, and adding more punishment
to such schemes if they don't scale nicely to overkill security levels.
There's a severe quantitative error in the modeling---for example,
category #5 requires 2^256 pre-quantum _multi-user_ security, which
AES-256 does _not_ provide---but this error doesn't hide the intent.
> we have tried to provide enough different security categories so that
> for every PQC scheme, there will be a few categories where it can fit
> reasonably well.
All of the categories have pre-quantum/post-quantum security-level ratio
1.5 or 2. Both of these numbers are very poor matches to, e.g.,
* ratio 1 for collision search,
* ratio asymptotically ~1.1 for SVP,
* ratio asymptotically ~1.2 for subset sum,
and many other fundamental attack algorithms in public-key crypto (by
the best algorithms known). For code-based crypto the ratio is
asymptotically 2, but preliminary estimates suggest that the ratio is
much smaller for reasonable security levels. Furthermore, multi-user
attacks seem to move the ratio closer to 1, and of course any overheads
of quantum computation move the ratio even closer to 1.
> - Should we create a new security category that is based on a 160-bit
> block cipher (which would sit between categories 1 and 3, which are
> based on AES-128 and AES-192)?
This question is superseded by the recognition that security estimates
will be significantly revised anyway. See my previous comments regarding
RSA-2048, optimizations for specific sizes, etc.
Instead of trying to figure out precise (pre,post) possibilities in
advance, why not let the (pre,post) levels be determined by analysis of
the submissions? This guarantees the best possible fit, allows better
comparisons, and avoids wasting submitter time on constant parameter
tweaks. Of course you'll want to specify a minimum target below which
submissions are considered broken, and you'll also want to specify a
maximum target (since the alternative is a pointless arms race).
> One might be a bit wary of analyses of "what happens if quantum
> operations are {2^10, 2^20, 2^30, 2^40} times more expensive than
> classical operations,"
[ ... ]
> what if someone comes up with a quantum attack that uses 2^80
> classical operations and 2^40 quantum operations?
That's a quantum attack whose overall cost looks like 2^80 + 2^40 Q,
which one can reasonably guess is close to 2^80.
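Numerically, writing Q = 2^q for the assumed cost of one quantum operation in pre-quantum operations:

from math import log2

# Total cost of the hybrid attack above (2^80 classical + 2^40 quantum),
# in pre-quantum-operation units, for the cost ratios quoted earlier.
for q in (10, 20, 30, 40):
    total = 2**80 + 2**40 * 2**q
    print(f"Q=2^{q}: total ~ 2^{log2(total):.2f}")

Only once Q approaches 2^40 does the quantum part even register.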
The imbalance of numbers makes this type of evaluation seem trivial.
However, sophisticated quantum algorithms normally have a mix of
operations, and the relative costs of the operations are extremely
unclear, as mentioned in my original comments to NIST:
At this point we have only crude guesses as to the ultimate costs of
different quantum operations. I understand that NIST wants to define
2^b post-quantum security as 2^b quantum AES computations, but what
is the relative cost of a quantum AES computation and a lookup in an
N-entry table using a quantum index? How does this depend on N? How
much harder is it if the table entries are themselves quantum?
Eventually we'll have a better idea of the amount of energy used by each
operation, and specifically the amount of power at each moment in time.
Simple addition will then tell us the amount of power used by an entire
computation at each moment in time. We'll then compare this to sensible
real-world predictions of the power and time available to the attacker.
This perspective provides a principled way to put all operations---
quantum or not---onto a unified scale. There are other limits on attacks
but power and time are of course quite fundamental.
---Dan
I think the security model needs to exceed the computational capacity of a nanomole of DNA.
Assuming a mole of DNA can yield a computational capacity in excess of 2^60 (at 650 g / mole), can we not assume that a 700k ton DNA facility yields a computational capacity in excess of 2^90?
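The arithmetic behind the 2^90 figure (treating the 700k tons as metric tonnes; the 650 g per mole is as stated, and the 2^60 of capacity per mole is the assumption from the question):

from math import log2

grams = 700_000 * 1_000_000     # 700k tonnes expressed in grams
moles = grams / 650             # ~1.1e9 moles, i.e. about a gigamole
capacity_log2 = 60 + log2(moles)
print(f"~2^{log2(moles):.1f} moles => capacity ~2^{capacity_log2:.1f}")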
>Instead of trying to figure out precise (pre,post) possibilities in
>advance, why not let the (pre,post) levels be determined by analysis of
>the submissions? This guarantees the best possible fit, allows better
>comparisons, and avoids wasting submitter time on constant parameter
>tweaks. Of course you'll want to specify a minimum target below which
>submissions are considered broken, and you'll also want to specify a
>maximum target (since the alternative is a pointless arms race).
I agree, the current categories are somewhat artificial and five
categories are likely too many anyway. AES has three levels (128, 192,
256) and the 192 level is not used at all. Also post-quantum, the world
will likely end up using two major security levels, one for ordinary use
and one for TOP SECRET. I think the reality is that neither NIST not the
cryptographic community have a clear view on what the quantum security
levels should be or for that matter what the ratio between classical
security and quantum security should be. NIST should consider keeping the
criteria more open at this point instead of forcing contributions with
quantum security levels and ratios that might not be optimal. The only
disadvantage I see with open ranges (e.g. 128-256 classical, 64-128
quantum) is that direct comparison between algorithms might be harder, but
based on Dan's comments, it seems like the comparison will not be fair
with the currently suggested criteria.
>I'm sorry for re-hashing old topics.
>
>I think the security model needs to exceed the computational capacity of
>a nanomole of dna.
I assume you mean gigamole.
>Assuming a mole of DNA can yield a computational capacity in excess of
>2^60 (at 650g / mole), can we not assume that a 700k ton DNA facility
>yields a computational capacity in excess of 2^90?
Correct me if I am wrong, but wouldn't a DNA computer affect the classical
security level rather than the quantum security level? My understanding is
that DNA computers are capable of doing massive parallel brute force, but
would not be able to run any quantum algorithms.
A DNA computer with a capacity to brute force 2^90 keys would only have a
chance of 2^-38 to break AES-128 on each run. I do not know how much such
a DNA computer would cost, but it does not seem obvious that it is more
cost efficient than using a large amount of dedicated classical hardware.
I chose a 700k tonne facility assuming a facility the size of the Newtown WPCP in New York, i.e., in the range of 2B.