Christopher J Peikert writes:
> The distinction is obvious and self explanatory: for the Rainbow NISTPQC
> submission, we now have a conclusive demonstration that its proposed
> parameters fall far short of their targeted NIST security levels.
True for rainbow1a, but false for rainbow3c and rainbow5c. Furthermore,
asking whether attacks have been _conclusively demonstrated_ is clearly
setting the bar too high for issuing warnings.
I already drew this distinction ("The issue isn't just the demonstrated
attack on rainbow1a: the new paper says 2^131 operations for rainbow3c
and 2^164 operations for rainbow5c, where the round-1 submission in 2017
says 2^217.4 and 2^275.4 respectively") and advocated warnings based on
the _drop_ of security levels.
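For concreteness, here's the arithmetic behind those drops as a quick Python check (all figures are the ones quoted above from the 2017 round-1 submission and the new paper):

```python
# Security-level drops for rainbow3c and rainbow5c, using the log2(operations)
# figures quoted in this message: round-1 (2017) claims vs. new attack costs.
round1_claims = {"rainbow3c": 217.4, "rainbow5c": 275.4}
new_attacks   = {"rainbow3c": 131.0, "rainbow5c": 164.0}

for name in round1_claims:
    drop = round1_claims[name] - new_attacks[name]
    print(f"{name}: attacks sped up by a factor of 2^{drop:.1f}")
```

That's a drop of more than 86 bits for rainbow3c and more than 111 bits for rainbow5c.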
Nobody is disputing the need for warnings regarding the demonstrated
break. The question is what to do beyond that: in particular, how to
warn people about security degradation.
The official NISTPQC evaluation criteria express concern about "schemes
that were designed by repeatedly patching older schemes that were shown
vulnerable to cryptanalysis". What we're seeing with Rainbow is a patch,
an increase of key sizes, in response to cryptanalysis. But we've also
seen lattices increasing key sizes in response to cryptanalysis, as
illustrated by the jump from dimension 256 to dimensions 512/640 (in
both cases for matching AES-128), a jump big enough to be clearly
visible to everybody despite neverending lattice-security obfuscation.
Surely NIST should be consistently issuing warnings regarding all of
these security losses. I'm concerned by the selectivity of warnings in
NIST's earlier reports, the same way that I'm concerned by QKD-vs.-PKC
reports that highlight PKC failures and conceal QKD failures.
> For the remaining lattice submissions, all known attacks -- and even
> optimistic extrapolations thereof -- are quite far from endangering the
> targeted security levels.
False. For example, the round-3 Kyber submission said that known attacks
against the newly proposed round-3 Kyber-512 could use just 2^135.5
"gates", i.e., 2^8 times fewer "gates" than AES-128 key search. There
have been further attacks since then: e.g., an attack at Asiacrypt 2021
that appears to further reduce the security of round-3 Kyber-512.
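To make the gap concrete, here's a one-line comparison of the quoted round-3 Kyber-512 figure against NIST's 2^143 gate-count floor for AES-128 key search (mentioned again below):

```python
# Gap between the round-3 Kyber-512 gate count quoted above (2^135.5 "gates")
# and NIST's AES-128 key-search floor of 2^143 "gates".
kyber512_gates_log2 = 135.5  # round-3 Kyber submission figure
aes128_floor_log2 = 143.0    # NIST's category-1 gate-count floor

gap_bits = aes128_floor_log2 - kyber512_gates_log2
print(f"gap: {gap_bits} bits, i.e., a factor of about {2**gap_bits:.0f}")
# a factor of about 181, i.e., hundreds of times fewer "gates"
```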
The Kyber submission argues that the costs of RAM ("the massive memory
requirements that are ignored in the gate-count metric") provide a
security margin---but surely such arguments should be applied
consistently across submissions!
Furthermore, NIST wrote that "failing to meet category 1 would result in
us discarding a parameter set, while failing to meet a higher level
could simply mean changing how it is labeled in our standard." So the
question of whether Kyber-512 reaches "category 1" is important for
whether Kyber-512 is kept at all, whereas the question of whether
rainbow5c reaches "category 5" could be a mere matter of labeling.
The real reason for issuing a warning about rainbow5c is not the
comparison to AES-256, but rather the drop of estimated costs of known
attacks, from 2^275.4 operations in 2017 to 2^164 operations in 2022.
It's not that 2^164 operations will happen in the foreseeable future;
it's that one has to worry that this isn't the end of the story. But,
for the same reason, surely there also have to be warnings about the big
advances in lattice attacks that have forced dimension-256 lattice
recommendations in 2010 to be replaced by much higher dimensions today.
> Moreover, even for the cited pre-NISTPQC lattice parameters, it's not clear
> whether a "serious AT analysis" -- the metric endorsed in the quoted
> message and its author's research -- of known attacks would actually imply
> sub-AES-128 security, when accounting for large memory size/accesses and
> other real costs.
Structurally, are you seriously arguing that the uncertainties in the
literature regarding the costs of lattice attacks are a reason to _not_
issue warnings regarding advances in lattice attacks, such as warnings
that the 2010 Lindner--Peikert "theoretically sound" dimension-256
lattice parameters are broken and should not be used?
Quantitatively, the best estimates available are that the real costs of
sieving (never mind the latest enumeration advances) are at roughly 30%
higher security levels than a naive free-RAM analysis would suggest.
This certainly isn't enough to rescue that dimension-256 proposal.
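As a sketch of the arithmetic, assuming the roughly-30% figure above: a free-RAM estimate of S bits corresponds to roughly 1.3*S bits of real cost, so clearing an AES-128-level target of 128 bits would already require a free-RAM estimate close to 99 bits.

```python
# Rough effect of the ~30% adjustment mentioned above: real sieving costs
# run at security levels about 30% higher than a naive free-RAM analysis.
def adjusted_bits(free_ram_bits, factor=1.3):
    return factor * free_ram_bits

# Minimum free-RAM estimate needed to reach a 128-bit real-cost target:
threshold = 128 / 1.3
print(f"free-RAM threshold for 128-bit real cost: {threshold:.1f} bits")
```

Estimates far below that threshold are not rescued by the adjustment.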
> Compared to LP'11, FrodoKEM and Kyber aim to give *lower bounds* on
> security that are a good deal lower than the *upper bounds* implied by
> known attacks.
This is partially correct (see below), but not meaningful as stated,
since it would be satisfied by giving a lower bound of, e.g., 0. What's
missing here is a requirement for the "lower bound" to be as secure as
AES-128, the minimum allowed security level.
Does Kyber have a lower-bound analysis meeting this requirement? Let's
look at the submission.
The original Kyber submission claimed repeatedly that its analysis was
"conservative". But it then reported that the result of the analysis for
Kyber-512 was only 2^112 (see Table 3)---which, oops, is much smaller
than the AES-128 target.
The submission then (see bottom of page 17) stated various unquantified
arguments for the claim that Kyber-512 was as hard to break as AES-128.
This AES-128 conclusion is not the "conservative" analysis and was never
claimed to be "conservative"---even though it's easy to see how readers
can be misled into thinking otherwise.
Furthermore, a careful look at the supposedly "conservative" analysis
(see Section 6 of the round-2 NTRU Prime submission) shows that the
analysis combines a series of overestimates, underestimates, potential
overestimates, and potential underestimates. Selectively pointing to the
underestimates does _not_ justify the claim that this is a "lower
bound".
In fact, all available evidence is that the output of the "conservative"
analysis is _above_ the actual attack costs for all sufficiently large
parameters. See my pqc-forum email dated 30 May 2020 02:15:31 +0200.
(The issue, in a nutshell, is that "dimensions for free" appears to be a
superpolynomial speedup while all of the slowdowns appear to be merely
polynomial.)
The round-3 Kyber submission tried to quantify various issues, leaving a
30-bit range of uncertainty for the number of "gates" in known attacks
against round-3 Kyber-512 (while round-2 Kyber-512 has been abandoned).
As noted above, the low end is hundreds of times fewer "gates" than
AES-128 key search.
> - They ignore the high cost of the huge memory needed for sieving in
> large dimensions.
False. The "conservative" analysis ignores this, but the Kyber
submission doesn't. For example, as noted above, the current Kyber
submission points to "the massive memory requirements that are ignored
in the gate-count metric" as a reason to think that it's okay to be
breakable by an attack using hundreds of times fewer "gates" than
AES-128 key search.
Page 17 of the original Kyber submission similarly pointed to "the cost
of access into exponentially large memory" as part of its unquantified
argument that round-1 Kyber-512 was as hard to break as AES-128.
> - They focus on the "Core-SVP" cost of solving a single SVP in a large
> enough dimension, ignoring the cost of multiple SVP calls and
> the surrounding lattice reduction.
False. This is something else that the "conservative" analysis ignores
but that the Kyber submission has never ignored.
For example, page 17 of the original Kyber submission pointed to "the
(polynomial) number of calls to the SVP oracle" as another part of its
unquantified argument that round-1 Kyber-512 was as hard to break as
AES-128. The latest Kyber submission tries to quantify this, and in
particular claims (top of page 27) that this adds 11-12 bits of
security, but then says (page 29) that it expects to lose 2-8 bits from
this because of known speedups.
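Taken together, those two quantified adjustments (figures as quoted above) leave a fairly small net effect; a quick interval check:

```python
# Net effect of the adjustments cited above: +11..12 bits claimed for the
# number of SVP calls and surrounding reduction (top of page 27), minus
# 2..8 bits expected to be lost to known speedups (page 29).
gain_lo, gain_hi = 11, 12
loss_lo, loss_hi = 2, 8

net_lo = gain_lo - loss_hi  # worst case for the defender
net_hi = gain_hi - loss_lo  # best case for the defender
print(f"net effect: {net_lo} to {net_hi} bits")
```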
> - They ignore the bit-operation costs of working with real numbers, etc.
False. For example, page 17 of the original Kyber submission pointed to
"the gate count required for one 'operation'" as yet another part of its
unquantified argument that round-1 Kyber-512 was as hard to break as
AES-128. Again the latest Kyber submission tries to quantify this and
leaves considerable uncertainties.
Of course, the 2^128 operations to search all AES-128 keys also involve
many bit operations per key. This is why NIST set the minimum at 2^143
"gates". This is also why the original Kyber submission tried to argue
that it would gain a factor above 2^30 compared to its 2^112, rather
than just a factor 2^16.
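The arithmetic here is easy to check:

```python
# AES-128 key search costs 2^128 keys times many bit operations per key,
# which is why NIST's floor is 2^143 "gates" rather than 2^128. Starting
# from the 2^112 figure, reaching the floor needs a factor above 2^31;
# a factor of only 2^16 would reach just 2^128.
floor_log2 = 143
kyber512_core_log2 = 112

needed_factor_log2 = floor_log2 - kyber512_core_log2
print(f"factor needed: 2^{needed_factor_log2}")          # 2^31
print(f"2^112 * 2^16 = 2^{kyber512_core_log2 + 16}")     # 2^128 < 2^143
```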
> Naturally, this will yield larger parameters to get above a desired
> security lower bound. But it also gives more room for plausible future
> improvements in cryptanalysis, like time-memory tradeoffs, amortizing SVP
> calls, etc.
This is misinformation regarding the Kyber submission.
> (Whether it's "good" to use these attacker-friendly metrics is beside the
> point; the point is the big difference in methodology between LP'11 and
> FrodoKEM/Kyber, making their security bounds incomparable.)
To be clear, are you continuing to propose the "theoretically sound"
dimension-256 lattice systems from 2010 Lindner--Peikert as matching
AES-128 security (even better, requiring "2^120 seconds" to break on a
2.3GHz core), despite all the subsequent advances in lattice attacks?
---D. J. Bernstein