Daniel Apon writes:
> lattice sieving historically involves high memory costs
The best estimates available here aren't comforting for Kyber.
https://ntruprime.cr.yp.to/nist/ntruprime-20201007.pdf, using energy
numbers published by Intel, estimates each access to a bit within N bits
of memory as matching the cost of N^0.5/2^5 bit operations (page 57),
and uses this to estimate how RAM costs affect concrete sieving costs.
The numerical examples on page 103 show the extra security ranging from
a 2^40 factor for Core-SVP 2^129 to a 2^90 factor for Core-SVP 2^271.
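As a sanity check, here is that access-cost model as a minimal Python
sketch (the function name is mine, not from the paper):

    import math

    def access_cost_bits(log2_N):
        # Bit-operation-equivalent cost of one access to a bit within
        # N = 2^log2_N bits of memory, under the N^0.5/2^5 model from
        # page 57 of ntruprime-20201007.pdf.
        return 2 ** (log2_N / 2 - 5)

    # Example: one access within 2^100 bits of memory is modeled as
    # costing 2^45 bit operations.
    print(math.log2(access_cost_bits(100)))  # 45.0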
See Section 6 of the same document for many reasons that these numbers
may underestimate or overestimate the actual attack costs. The
round-3 Kyber documentation estimates that Kyber-512 attacks cost
between 2^135.5 and 2^165.5 "gates", where the "floor" for NIST's lowest
security category is 2^143 "gates".
The new attack paper starts from the Kyber documentation's middle
estimate, namely 2^151.5, and says it's reducing the attack costs by a
factor 2^14, to 2^137.5, using better attack algorithms.
It's clear that at least some of the algorithm-analysis uncertainties
stated in the round-3 Kyber documentation concern speedups that combine
with the speedups in the new paper. The optimistic possibility
for the attacker is that the new paper is actually giving a 2^14 savings
from 2^135.5, i.e., 2^121.5 bit operations.
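Spelled out as exponent arithmetic (a trivial sketch; the variable
names are mine):

    # All costs as log2 of "gates".
    round3_low, round3_mid = 135.5, 151.5
    speedup = 14.0
    print(round3_mid - speedup)  # 137.5: the figure the new paper reports
    print(round3_low - speedup)  # 121.5: the optimistic-for-the-attacker case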
Does accounting for real RAM costs close the gap between 2^121.5 and
2^143? One might think that, sure, this is covered by the 2^40 mentioned
above: Kyber-512 previously had security 2^40*2^135.5 = 2^175.5, so a
32.5-bit security margin, and the new paper is reducing this to an
18.5-bit security margin: i.e., the new paper is merely cutting out 40%
of the Kyber security margin, rather than breaking Kyber outright.
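In other words (a minimal sketch of the margin arithmetic):

    floor = 143.0                     # NIST's lowest-category "gate" floor
    with_ram = 135.5 + 40.0           # 175.5: low-end cost plus the 2^40 RAM factor
    margin_before = with_ram - floor  # 32.5-bit security margin
    margin_after = margin_before - 14.0  # 18.5 bits after the 2^14 speedup
    print(margin_before, margin_after)   # 32.5 18.5
    print(14.0 / margin_before)          # ~0.43: roughly 40% of the margin gone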
But let's look more closely at the numbers. As a preliminary point,
round-3 Kyber-512 is starting from Core-SVP just 2^112 and
revised-Core-SVP just 2^118, with exponents 87% and 91% of 129
respectively, so the obvious estimate is about 2^36 instead of 2^40.
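Here's that rescaling as a minimal sketch, under the simplifying
assumption (mine, not from either paper) that the extra memory-cost
exponent scales in proportion to the Core-SVP exponent:

    # Rescale the 2^40 example, stated above for Core-SVP exponent 129,
    # down to the round-3 Kyber-512 exponents.
    for core_svp in (112.0, 118.0):   # Core-SVP and revised Core-SVP
        ratio = core_svp / 129.0      # 0.87 and 0.91 respectively
        print(round(ratio, 2), round(40.0 * ratio, 1))
    # Output: 0.87 34.7, then 0.91 36.6: hence "about 2^36 instead of 2^40".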
Furthermore, this 2^36 is accounting for the energy cost of accesses to
a giant RAM array, while it's clear that many of the bits of security
beyond Core-SVP claimed in the round-3 Kyber security analysis are
coming from accounting for the cost of local bit operations. These
effects don't multiply; they add!
Internally, Core-SVP is starting from estimates of the number of
"operations" inside sieving. It makes sense to say that the attacker
needs to pay for the large-scale memory access inside each "operation".
It also makes sense to say that the attacker needs to pay for all the
bit operations inside each "operation". But the local bit operations are
an asymptotically irrelevant extra cost on top of the memory access, and
the best bet is that they don't make much difference for Kyber-512. The
real cost of this type of algorithm is, at a large scale, driven
primarily by data motion, not by local computation.
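A minimal sketch of why these overheads add rather than multiply (the
2^30 figure for local bit operations is an illustrative assumption, not
an estimate from any paper):

    import math

    def log2_sum(a, b):
        # log2(2^a + 2^b): how per-"operation" costs combine when they
        # add in the linear domain.
        hi, lo = max(a, b), min(a, b)
        return hi + math.log2(1.0 + 2.0 ** (lo - hi))

    # If large-scale memory access contributes 2^36 per-operation overhead
    # and local bit operations contribute 2^30, the total is barely above
    # 2^36, nowhere near the 2^66 that multiplying would suggest.
    print(log2_sum(36.0, 30.0))  # ~36.02
    print(36.0 + 30.0)           # 66.0: the (wrong) multiplicative combination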
(The new paper seems to have some local speedups to the sieving inner
loop, which similarly should be presumed to make little difference next
to the memory-access bottleneck, but my understanding is that these
account for under half of the bits of security loss that the paper is
reporting.)
So I don't see how current knowledge can justify suggesting that the
costs of RAM rescue Kyber-512 from the new attack. It seems entirely
possible that the real costs of this Kyber-512 attack are considerably
below the costs of a brute-force AES-128 attack. Deciding this one way
or the other will require much more serious analysis of attack costs.
Certainly it's disturbing to see Kyber-512 dropping from (1) supposedly
"conservative" to (2) bleeding-edge in bit operations and then to (3)
apparently broken in bit operations and bleeding-edge in real cost. The
Kyber documentation really should stop using the word "conservative".
It's also deeply concerning that the uncertainties in evaluating costs
of lattice attacks, such as the 2^30 uncertainty factor (2^135.5 through
2^165.5) in the round-3 Kyber submission, have been weaponized again and
again to suggest that we shouldn't worry about a paper speeding up
lattice attacks by a factor 2^5 or 2^10 or 2^15. The cumulative effect
of years of such speedups has clearly been far more than 2^30. (The
mascot for lattice-based cryptography should be a slow-boiled frog.)
As a historical matter, we've seen again and again in cryptography that
a series of public attack advances has culminated in a feasible attack.
The community recognizes and promotes progress by putting serious effort
into _quantifying_ the attack costs. Security evaluation is obviously
the most important input to NISTPQC, so the NISTPQC rules should have
been designed to prioritize and assist quantification of attack costs.
Back in 2016, NIST proposed NISTPQC evaluation rules that instead
prioritized fake confidence in staying above cutoffs such as AES-128 and
AES-192. I correctly predicted in
https://blog.cr.yp.to/20161030-pqnist.html
that "Quantitatively comparing post-quantum public-key security levels
is going to be a nightmare". I recommended throwing away the security
cutoffs and replacing them with the traditional focus on analyzing
algorithm costs as accurately as possible. NIST's subsequent arguments
for prioritizing cutoffs don't stand up to examination---for details see
Appendix B.5 of
https://ntruprime.cr.yp.to/latticerisks-20211031.pdf
---and, even worse, the cutoffs are in cost metrics that NIST _pretends_
to have defined but has never actually defined. (See below.)
Going forward, clearly NIST is going to include some lattice systems in
its first standards; supposedly we'll find out which ones any moment
now. Maybe NIST is going to recklessly include the smallest proposed
parameters---but apparently we won't find this out for a while; NIST
indicated, surprisingly, that it _isn't_ planning to name parameters
yet. Given how much supposed security lattices have lost over the years
and how large the remaining attack surface is, there's a worrisome level
of risk even for bigger parameters such as Kyber-1024. So there's an
ongoing need for clear quantification of the costs of lattice attacks.
> the NIST PQC call for proposals defined a version of the (quantum)
> circuit model involving a MAXDEPTH parameter for gate-operations in series.
False. NIST described some desiderata for a model, such as MAXDEPTH, but
never defined a model to be used for NISTPQC. In particular, NIST never
defined the set of allowed "gates", despite requests for clarification.
(The algorithms literature includes many different gate sets, often
giving wildly different algorithm costs; see, e.g., how Ambainis's
distinctness paper uses quantum RAM gates.) Section 5.4 of
https://cr.yp.to/papers/categories-20200918.pdf
gives quotes, references, and numerical examples to illustrate the lack
of definition.
We've seen repeatedly how the ambiguities in NIST's pseudo-definitions
have been exploited to downplay attacks against some systems---this
reached amazing heights with last month's excuses for not withdrawing
the claim that the dimension-256 parameters in the preliminary Frodo
design from Lindner--Peikert "appear to be at least as secure as
AES-128"---and at the same time to hype attacks against other systems.
NIST's failure to pick a metric has done far more damage to comparisons
than whatever damage would have been done by selecting a metric that
turns out not to be perfectly realistic.
---D. J. Bernstein