Within the general risks of structured lattices, there are specific
cryptanalytic reasons for concern regarding the security impact of
Kyber's leap to higher-rank modules over smaller rings: for example,
kyber768 relying on rank-3 modules over a ring of dimension only 256.
It's entirely possible that this structure is exploitable. Compared to,
say, the original NTRU system using the 719th cyclotomic field,
kyber768 could end up _much weaker_, as a direct result of Kyber's
module structure.
There's advertising to the contrary, trying to make people believe that
the extra risks of Kyber have been proven not to exist. This advertising
is fundamentally flawed. The proofs _do not_ say what they're portrayed
as saying. There's no hope of fixing this.
Let's go carefully through the details.
1. Special field structures
\Q[X]/(X^1024+1), the 2048th cyclotomic field, has a big pile of
nontrivial subfields: three subfields of half degree, three subfields of
quarter degree, etc.
For example, one subfield is \Q[X^4]/(X^1024+1), which is isomorphic to
\Q[X]/(X^256+1). In this message I'll apply such isomorphisms implicitly
and talk about \Q[X]/(X^256+1) as a subfield of \Q[X]/(X^1024+1).
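To see the implicit identification spelled out, here's a tiny Python
(sympy) check, purely illustrative: substituting X^4 for X turns
X^256+1 into X^1024+1, which is exactly why f(X) |-> f(X^4) is a
well-defined embedding of \Q[X]/(X^256+1) into \Q[X]/(X^1024+1).

  # the polynomial identity behind the subfield embedding X |-> X^4
  from sympy import symbols, expand
  X = symbols('X')
  assert expand((X**4)**256 + 1 - (X**1024 + 1)) == 0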
Such a big pile of subfields is unusual _even among cyclotomic fields_.
There are far fewer subfields in, e.g., the Nth cyclotomic field where N
and (N-1)/2 are prime, exactly the fields recommended in the original
NTRU paper for implementation reasons.
2. Security risks of special field structures
Do unnecessary subfields damage security? From a cryptanalytic
perspective, there are clear reasons to worry that the answer is yes.
There's a vast literature on number-theoretic computations exploiting
subfields. Subfields speed up discrete-log computations. They speed up
class-group computations, unit-group computations, etc. Known S-unit
attacks already exploit subfields in a variety of ways; see, e.g.,
https://cr.yp.to/papers.html#multiquad,
https://arxiv.org/abs/2002.12332,
and
https://cr.yp.to/talks.html#2021.08.20.
The notion that S-unit attacks are living in a separate world from
lattice-based cryptography was already disproven by the unit attacks
breaking the cyclotomic case of Gentry's original STOC 2009 FHE system,
the original multilinear-map system, etc. Subsequent S-unit attacks are
coming closer and closer to lattice KEMs in NISTPQC, and have already
broken through an amazing series of claimed attack "barriers" reviewed
in Section 1.2 of
https://ntruprime.cr.yp.to/latticerisks-20211031.pdf.
What's most important here is that there's no proof, not even a hope of
proving, that extra subfields are safe. I'll return to this below.
3. The modules-are-less-structured argument
There has been endless advertisement of rank-{2,3,4} MLWE systems as
supposedly reducing security risks compared to RLWE systems. Let's look
at what this difference means mathematically.
For concreteness, let's say
* a,e are secret vectors of length 1024 with small entries,
* G is a linear transformation of such vectors, and
* the public key is aG+e.
One of the basic attack goals is to find a,e given G,aG+e.
This description leaves many options for the set of transformations G.
For example, is it any 1024x1024 matrix? Or a matrix with a particular
structure? The choice of this set could have a big impact on security.
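As a toy illustration of this setup (made-up parameters: length 4
instead of 1024, modulus 17, arbitrary numbers, Python with sympy),
here is what gets published:

  from sympy import Matrix
  q = 17
  G = Matrix([[3, 14, 7, 1],
              [9, 2, 5, 16],
              [11, 8, 4, 6],
              [0, 13, 10, 12]])       # a 4x4 transformation mod q
  a = Matrix([[1, -1, 0, 1]])         # secret vector, small entries
  e = Matrix([[0, 1, -1, 1]])         # secret vector, small entries
  pk = (a*G + e).applyfunc(lambda t: t % q)
  print(pk)   # public; the attack goal is to recover a, e from G, pk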
Let's compare three choices. I'll describe each choice as a \Q-algebra,
which means a ring R (not necessarily commutative) equipped with a ring
morphism from \Q to R. Here \Q is the field of rational numbers. I'll
abbreviate \Q-algebra as simply "algebra".
Each of these algebras acts as a set of linear transformations on the
module \Q^1024. I'll skip reviewing the standard mechanisms to extract
important "integral" subrings (e.g., extracting \Z[X]/(X^1024+1) from
\Q[X]/(X^1024+1)) and to reduce modulo q, giving linear transformations
on \Z^1024 and on (\Z/q)^1024.
Here are the three choices of algebras:
* "The LWE choice": \Q^(1024x1024), the algebra of 1024x1024 matrices
over \Q, with each matrix viewed as a linear transformation on the
module \Q^1024 in the usual way. There are 1048576 degrees of
freedom in a matrix.
* "The MLWE choice": (\Q[X]/(X^256+1))^(4x4), the algebra of 4x4
matrices with entries in the ring \Q[X]/(X^256+1). Each of these
matrices is viewed as a (\Q[X]/(X^256+1))-linear transformation,
and hence \Q-linear transformation, on the module
(\Q[X]/(X^256+1))^4, which is isomorphic to \Q^1024 as a \Q-module.
There are only 4096 degrees of freedom here. This is a small,
highly structured subalgebra of the first algebra.
* "The RLWE choice": \Q[X]/(X^1024+1), with each element viewed as
the linear transformation that multiplies by that element.
There are only 1024 degrees of freedom here. This is an even
smaller, even more highly structured subalgebra.
It's easy to prove that the third algebra is a subalgebra of the second,
and that solving various problems for transformations of the second type
implies solving various problems for transformations of the third type.
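To see this containment concretely, here's a toy Python (sympy) check
at hypothetical small parameters: degree 8 in place of 1024, so rank 4
over the degree-2 subring \Q[Y]/(Y^2+1) with Y = X^4. Multiplication by
any element of \Q[X]/(X^8+1) really is a 4x4 matrix over the subring
acting on coordinate vectors.

  from sympy import symbols, Poly, rem, expand

  X, Y = symbols('X Y')
  n, r = 8, 4            # big ring degree, module rank
  d = n // r             # subring degree; Y = X^r satisfies Y^d = X^n = -1

  def red_big(p):        # reduce mod X^n + 1
      return rem(expand(p), X**n + 1, X)

  def red_sub(p):        # reduce mod Y^d + 1
      return rem(expand(p), Y**d + 1, Y)

  def coords(p):
      # coordinates of p over the basis 1, X, ..., X^(r-1): the
      # coefficient of X^(j+r*k) contributes Y^k to slot j
      c = Poly(red_big(p), X).all_coeffs()[::-1]
      c = c + [0]*(n - len(c))
      return [sum(c[j + r*k]*Y**k for k in range(d)) for j in range(r)]

  def mult_matrix(a):
      # column j of the matrix of multiplication-by-a is coords(a * X^j)
      return [coords(a * X**j) for j in range(r)]

  a = 3 + X - 2*X**3 + X**5      # arbitrary test elements of Q[X]/(X^8+1)
  b = 1 - X**2 + 4*X**6 + X**7
  M = mult_matrix(a)             # 4x4 matrix, entries in Q[Y]/(Y^2+1)

  lhs = [red_sub(t) for t in coords(a * b)]
  cb = coords(b)
  rhs = [red_sub(sum(M[j][i]*cb[j] for j in range(r))) for i in range(r)]
  assert all(expand(u - v) == 0 for u, v in zip(lhs, rhs))
  print("mult-by-a acts as a 4x4 matrix over Q[Y]/(Y^2+1)")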
Sounds like switching from RLWE to MLWE is provably eliminating extra
structure and reducing risks, right?
NIST's official round-2 report cites this proof as part of its reason
for rejecting NewHope in favor of Kyber. Half of the NewHope text in
that report is discussing this proof. The report categorically describes
"RLWE schemes" (not just NewHope) as "the most structured", "MLWE" as
"intermediately structured", and "plain LWE" as the "least structured".
4. The fundamental flaw in the argument
The provable relationship between \Q[X]/(X^1024+1) and
(\Q[X]/(X^256+1))^(4x4) relies on the fact that \Q[X]/(X^256+1) is a
subfield of \Q[X]/(X^1024+1).
Having \Q[X]/(X^256+1) as a subfield is a _special structure_ of
\Q[X]/(X^1024+1). For example, the proof does not work---and there is no
hope of making it work---as a comparison to the following field:
* "Another RLWE choice": \Q[X]/(X^1013-X-1), with each element viewed
as the linear transformation that multiplies by that element.
Or, if one wants to precisely match dimensions, the following field:
* "Yet another RLWE choice": \Q[X]/(X^1024-X-1), again viewed as
linear transformations in the usual way.
The third algebra above isn't actually "the" RLWE choice: there are
_many_ choices of fields in each dimension, and the proof requires
restricting attention to a narrow, structured subset of these choices.
This means that enabling the proof required _imposing structure_ on the
starting field. From the perspective of the algorithm designer, this
structure is an attack tool!
The full S-unit lattice for a nontrivial S isn't hard to calculate for
the field \Q[X]/(X^256+1), thanks to the smallness and special structure
of this field. This hasn't yet been publicly extended to a break of
problems for (\Q[X]/(X^256+1))^(4x4), but it's really not hard to
imagine that severe weaknesses of a ring would also create weaknesses
for 4x4 matrices over that ring.
Even people who have trouble seeing how such algorithms might work can't
deny the logical _possibility_ that the smallness and special structure
of \Q[X]/(X^256+1) will turn into a devastating attack against problems
for (\Q[X]/(X^256+1))^(4x4), making kyber1024 feasible to break.
The proof then says that, logically, there must also be a weakness for
\Q[X]/(X^1024+1). But this _doesn't_ logically imply weaknesses in most
other algebras. There's no reason to exclude the possibility of other
algebras being stronger: for example, \Q[X]/(X^1013-X-1) being stronger.
In this scenario, the proof is simply relating one weak problem to
another weak problem. What's terrifying is that _this is being used as
an argument to use a weak problem instead of a stronger problem_. People
are casually introducing a small structured ring into a cryptosystem and
blindly advertising the use of modules over this ring as risk reduction,
even though in this scenario it _damages_ security.
Are we supposed to worry about structured matrices and _not_ worry about
structured rings? This makes no sense. A direct comparison of the
algebras being used, the basic algebraic structure of the set of G used
for public keys aG+e, immediately shows that the move to modules
requires structure that otherwise would _not_ have to be there.
If the comparison is merely to NewHope, then the field \Q[X]/(X^256+1)
was present from the outset as a subfield. In this case, sure, insisting
on modules doesn't require adding any structure to the field. (There are
other problems with NIST's NewHope comparison; see below.) But this is a
_very_ special case.
Instead of acknowledging the special field structure enabling this
proof, NIST glibly generalizes, categorically describing "RLWE" as "more
structured" than "MLWE". That categorical statement is simply wrong!
For example, (\Q[X]/(X^256+1))^(4x4) has a subalgebra \Q[X]/(X^256+1)
with many special properties drawing the cryptanalyst's attention:
* The subalgebra is much bigger than \Q, but much smaller than the
full algebra.
* The subalgebra is in the "center" of the full algebra, meaning
that it commutes with every choice of G; see the toy check below.
* The subalgebra is a cyclotomic field.
* This field has a tower of smaller subfields having the same
properties, such as \Q[X]/(X^128+1), plus further subfields.
This structure is not present in \Q[X]/(X^1013-X-1), a field whose only
subfields are itself and \Q: since 1013 is prime, any subfield has
degree dividing 1013, hence degree 1 or 1013.
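As a toy check of the "center" property flagged above (Python with
sympy, hypothetical degree 2 in place of 256): a scalar matrix s*I over
the subring commutes with an arbitrary 4x4 matrix G over that ring.
This is immediate from commutativity of the subring; the point is that
this structure is forced on every public G.

  from sympy import symbols, Matrix, eye, zeros, rem, expand
  Y = symbols('Y')
  red = lambda t: rem(expand(t), Y**2 + 1, Y)   # reduce mod Y^2 + 1
  s = 3 + 2*Y                                   # element of the subring
  S = eye(4)*s                                  # corresponding scalar matrix
  G = Matrix(4, 4, lambda i, j: (i + 2*j + 1) + (i - j)*Y)  # arbitrary matrix
  assert (S*G - G*S).applyfunc(red) == zeros(4, 4)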
People are being misled into believing that the proof says that the
entire class of RLWE problems of degree around n is broken if rank-4
MLWE problems of degree around n/4 are broken. No, that's _not_ what the
proof says. One needs to narrow attention to some _highly structured_
RLWE problems to make the proof work. Adding this structure to enable
the proof also adds dangers that the proof doesn't address.
5. Small fields in pre-quantum DH
Think back to the old days when people used a 1024-bit prime field
inside your favorite discrete-log system, let's say DH. (Not so old,
actually.)
Imagine someone saying "You should move to this matrix version of DH
using 4x4 matrices over \F_{2^256}. For similar cost in the following
metrics, this allows many more matrices, and the following 4x4 matrix
problems are provably secure if some \F_{2^1024} problems are secure."
Um, yes, okay, there are more matrices, and it's not crazy to _hope_ for
the non-commutativity to add security compared to \F_{2^1024}. But if
(1) the subfields of \F_{2^1024} are a security disaster (this turns
out to be the case) and
(2) non-commutativity isn't enough to rescue the system (it turns
out to provide _no_ security benefit; see 1997 Menezes--Wu)
then the new system is a security disaster while, as far as we know, the
old one isn't.
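To make the Menezes--Wu point concrete, here's a toy Python (sympy)
sketch with made-up tiny parameters (a 2x2 matrix over \F_101 instead
of 4x4 over \F_{2^256}), showing the simplest case of the structural
leak: for a diagonalizable matrix, the secret exponent in matrix DH is
exposed through discrete logs of the eigenvalues, which live in (an
extension of) the base field. The actual Menezes--Wu reduction handles
general matrices via extension fields; this is only the easy case.

  from sympy import Matrix

  p = 101                                 # toy prime field F_p
  P = Matrix([[1, 2], [3, 5]])            # change of basis, invertible mod p
  D = Matrix([[2, 0], [0, 7]])            # eigenvalues 2 and 7 in F_p
  Pinv = P.inv_mod(p)
  M = (P*D*Pinv).applyfunc(lambda x: x % p)      # public "generator" matrix

  secret = 47
  Mk = (M**secret).applyfunc(lambda x: x % p)    # public key: M^secret mod p

  # attacker: diagonalize the public key and read off eigenvalue powers,
  # turning a matrix DLP into two tiny field DLPs
  Dk = (Pinv*Mk*P).applyfunc(lambda x: x % p)    # diag(2^secret, 7^secret)

  def dlog(base, target, p):                     # brute force; fine for toy p
      x, k = 1, 0
      while x != target:
          x, k = (x*base) % p, k + 1
      return k

  assert dlog(2, Dk[0, 0], p) == secret % 100    # 2 has order 100 mod 101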
So the proof is relating one weak problem to another weak problem.
Meanwhile it's misleading people into switching away from a strong
problem, even though the proof actually says _nothing_ about the strong
problem---the same way that the MLWE-vs.-RLWE proof says _nothing_ about
RLWE using fields outside a specially structured class.
This DH example illustrates the structural dangers inherent in (1)
constructing systems "with the objective of being able to give security
reductions for them" and (2) using this as a risk-assessment mechanism,
judging systems as less risky if they have more "security proofs". What
the proofs actually say about security is limited, and the limitations
have to be carefully taken into account.
6. Modules as a danger separate from cyclotomics
In the above matrix example, \F_{2^256} has a tower of subfields that
are unnecessary for DH, the same way that \Z[X]/(X^256+1) has many
subfields that are unnecessary for lattice-based cryptography, along
with further unnecessary structure such as being cyclotomic.
What happens if one removes this structure? Consider 4x4 matrices over
the prime field \F_p, where p is a 256-bit prime. "Provable security"
then highlights a relationship between these matrices and a 1024-bit
finite field \F_{p^4}.
Is it correct to say that the matrices are less structured than 1024-bit
fields? No, of course not. The proof is only for _extremely special_
1024-bit fields, the ones that have 256-bit prime subfields.
If this subfield structure opens up vulnerabilities (variants of NFS
that exploit subfields are an active research area today), and if these
are also vulnerabilities of the matrix system (which, again, turns out
to be the case; see 1997 Menezes--Wu), then moving from a 1024-bit prime
field to the matrix system has damaged security. So how can one portray
the proof as saying that moving to matrices is reducing risks?
What one sees here is that simply imposing the matrix structure, working
with 4x4 matrices over a field with 1/4 as many bits, is creating a new
security risk, a risk that is _not_ addressed by the "security proof".
Similar comments apply to lattice systems. Cyclotomics are particularly
worrisome, but _even outside the cyclotomic context_ it's entirely
possible that moving to modules---for example, a rank-4 module over a
quarter-dimension ring---damages security. There's certainly no proof to
the contrary.
Consider, as concrete examples, the following \Q-algebras:
(1) \Q[X]/(X^1024-X-1).
(2) \Q[X]/(X^1024-X^4-1).
(3) (\Q[X]/(X^256-X-1))^(4x4).
(4) \Q^(1024x1024).
Here the argument for modules says that #1 is a small, structured piece
of #4, which is worrisome, and that #2 is a small, structured piece of
#4, which is worrisome, and that #3 is an intermediate case between #2
and #4, making it less worrisome.
But #2 and #3 both have special structure that doesn't exist in #1!
This isn't as extreme as the tower of cyclotomics mentioned above, but
again there's an intermediate algebra with special properties catching
the cryptanalyst's eye as a potential springboard for attacks: both
algebras contain \Q[X]/(X^256-X-1), which is much bigger than \Q, much
smaller than the full algebra, and in the center of the full algebra.
The presence of this special intermediate field could expose #2 and #3
to attacks that don't apply to #1, a field whose only proper subfield is
\Q.
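As a purely illustrative Python (sympy) check of the first
containment: the element X^4 of #2 satisfies the defining polynomial of
\Q[X]/(X^256-X-1), so mapping X to X^4 embeds that field into #2. (In
#3 the same field sits inside as the scalar matrices.)

  from sympy import symbols, expand
  X, Y = symbols('X Y')
  f = Y**256 - Y - 1                  # defining polynomial of the subfield
  g = X**1024 - X**4 - 1              # defining polynomial of algebra #2
  assert expand(f.subs(Y, X**4) - g) == 0   # Y = X^4 is a root of f in #2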
Advocating a leap from #1 to #3, and trying to justify this by pointing
to a proof relating #2 to #3, is indefensibly exaggerating what the
proof says, and ignoring the security risk of introducing extra
subfields---again, something that has a long history of being exploited.
To summarize, what the advertising wants people to believe is that the
Kyber matrices are provably at least as strong as commutative rings
(acting on vectors of similar dimension). What the proof actually says
is, at best, that the Kyber matrices are at least as strong as some
_very special_ commutative rings, the rings used in NewHope. The proof
does _not_ say that the Kyber matrices are at least as strong as _other_
commutative rings. Closing this gap is hopeless.
7. Credits and subsequent discussions
The fundamental flaw in the module advertising was already pinpointed in
Section 5.5 of
https://ntruprime.cr.yp.to/latticerisks-20211031.pdf (in
less detail than the explanation I've given above). I cited that paper
in my first message in the small-rounding thread:
Theorems that might give a "modules are guaranteed safe" impression have
limitations rendering them logically inapplicable to NISTPQC. See, e.g.,
Section 5.5 of
https://ntruprime.cr.yp.to/latticerisks-20211031.pdf.
A followup a few days later, the message I'm responding to, repeats
essentially the same advertising---hiding the flaw inside a false
hypothesis.
[ context omitted by the message I'm replying to: ]
> > Let's try an example: the experiments reported in Section 4.2 of
> > https://www.ntru.org/f/hps98.pdf (the first paper Dr. Peikert cited).
> > These are attacking scaled-down versions of the original NTRU system.
[ context included in the message I'm replying to: ]
> > Why exactly is it supposed to be okay to count these experiments towards
> > "enough cryptanalysis" of systems that move to rank-3 modules over a
> > ring of only 1/3 dimension (e.g., kyber768),
> Because there are simple reductions that allow one to decrease the
> degree of the ring, while increasing the module rank.
> If R=Z[X]/(X^1024+1)
[ ... ]
> So if NTRU is over the ring R
This is the false hypothesis mentioned above.
The question was explicitly about the experiments in Section 4.2 of the
original NTRU paper. Those experiments used X^N-1 for various N; why
would they have detected security failures of X^N+1 with N being a power
of 2? Maybe failures of X^N+1 would show up for X^2N-1, but the
experiments didn't use any power-of-2 exponents. In short, no, this
isn't over the ring R; it isn't over _any_ power-of-2 ring.
The same hypothesis is, of course, also false for various NISTPQC KEMs,
such as sntrup1013.
Again, the (partial) proof that kyber1024 >= newhope1024 relies on the
fact that \Q[X]/(X^1024+1) has a much smaller subfield \Q[X]/(X^256+1).
What happens if the smallness and special structure of \Q[X]/(X^256+1)
turn out to be a security problem for _both_ kyber1024 and newhope1024,
a problem not shared by most fields---maybe even most cyclotomics?
Perhaps someone has in mind a _heuristic_ saying that if power-of-2
cyclotomics have a security problem then the problem will also show up
for many other cyclotomics. This essentially assumes away any potential
difference between power-of-2 cyclotomics and random cyclotomics. Maybe
this is a valid heuristic, but how well has it been tested?
Compare this to Dr. Peikert's message at the start of the small-rounding
thread: "The existing analyses essentially 'assume away' any potential
difference between deterministic rounding errors and random ones. Maybe
this is a valid heuristic, but how well has it been tested?"
If we're supposed to worry about the possibility that rounding loses
security, and the possibility that structure in matrices loses security,
shouldn't we worry about the possibility that power-of-2 cyclotomic
structure loses security, even compared to most cyclotomics? Where are
all the papers redoing every NTRU attack experiment for the power-of-2
case? Where are the objections to a lack of Kyber-specific experiments?
These possibilities are analogous in that none of these features have
been _proven_ secure. This is what I've been emphasizing in the last few
paragraphs---but it's also a _very_ low bar. Every lattice KEM has a
combinatorial explosion of features (obviously far beyond the number of
attack papers); only a narrow corner is proven secure; so it's trivial
to take any submission and selectively highlight a proof gap for that
submission. If this were done systematically then people would realize
how content-free it is. Instead it's done selectively, against a
backdrop of many other proofs being indefensibly exaggerated, such as
the MLWE-vs.-RLWE proof.
Correcting the exaggerations and studying how little the proofs actually
say makes clear that risk management has to be driven instead by the
literature designing and analyzing _algorithms_. What's truly concerning
about subfields, cyclotomics, etc. isn't the minimal point that they
haven't been proven safe. What's truly concerning is that these
structures have already been exploited for many years to speed up a wide
range of number-theoretic computations---often quite dramatically, as
illustrated by the quantum polynomial-time break of Gentry's original
STOC 2009 FHE system for cyclotomics (under minor assumptions), and the
non-quantum quasi-polynomial-time break of an analogous system using
multiquadratics (which aren't cyclotomic but have many subfields).
I understand that the magnitude of the danger isn't obvious to everyone.
But the idea that the danger has been _proven_ not to exist is absurd.
There's a specific fiction that leaping to rank-{2,3,4} modules, while
dropping the field degree correspondingly, has been proven to not lose
security; let's get that corrected for the record.
> what is your "concern" with the above?
First of all, for a mathematician, _any_ proof exaggeration is
unacceptable.
NIST wrote "In a technical sense, the security of NewHope is never
better than that of KYBER", pointing to a proof. But simply reading what
the proof says and comparing it to the NewHope-vs.-Kyber example shows
many reasons that, no, the proof _doesn't_ say that Kyber is at least as
strong as NewHope. (See generally my email dated 25 Jul 2020 10:36:57
+0200.) NIST didn't even bother to systematically _list_ those reasons,
never mind presenting an evaluation of the corresponding security risks.
This is horrifying.
The specific exaggeration that I'm emphasizing in this message doesn't
matter for NewHope in particular: as noted above, NewHope already had
\Q[X]/(X^256+1) present as a subfield. However, as part of the same
official report, NIST stated a much broader categorical preference for
"low-rank MLWE schemes over RLWE schemes"---evidently blind to the fact
that this leap requires inserting extra structure that could damage
security, structure that a pure RLWE system doesn't need. The proof does
_not_ provide any reason to think that (\Q[X]/(X^256+1))^(4x4) is as
safe as \Q[X]/(X^1013-X-1).
In short, readers are being led to believe that inserting the small
field \Q[X]/(X^256+1) into a system design is guaranteed safe---but the
argument presented for this starts by hypothesizing that \Q[X]/(X^256+1)
was already present as a subfield! This makes no sense.
> Is it:
> 1. It's possible that Module-LWE over random A'' is easy (so
> Kyber-1024 is easy), but Module-LWE over a structured A'' (i.e. the
> one you get from transforming A-->A'') is hard?
I have a clarification question regarding two general words here, "easy"
vs. "hard": are these supposed to refer to some _big_ gap in security?
Kyber's security margin seems to keep dropping and seems very small
(possibly nonexistent), so a small improvement in attacks could mean
that Kyber doesn't reach its claimed security level.
The rest of the specific text here seems like a pointless distraction
from the elephant in the room, the fact that the proof at hand compares
MLWE using \Q[X]/(X^256+1) to an _extremely restricted_ type of RLWE
system, one where \Q[X]/(X^256+1) is already present as a subfield.
> 2. That this reduction works only for dimensions that are a power of
> 2, so we can't say anything about Kyber-768 which has a 3x3 matrix A
> (i.e. maybe it's as hard as Kyber with a 2x2 matrix over the same
> ring).
In the absence of further information, one has to guess that the 3x3
case is stronger than the 2x2 case. But this is a weak statement given
that the 2x2 case is already on the bleeding edge against known attacks
and that the 3x3 case claims a _much_ higher security level.
Why resort to some weak 3-to-2 relation instead of talking about how
Kyber relates to some degree-768 field containing \Q[X]/(X^256+1), such
as \Q[X,Y]/(X^256+1,Y^3-Y-X)? Could this be because presenting such an
extension helps the reader realize that the 256*3 -> 768 proof is
restricted to a narrow, specially structured set of degree-768 fields?
> Both of these "objections", however, are counter to a lot of the
> theory that underlies lattice cryptography. (1) contradicts that
> random matrices A are the hard instances and (2) contradicts that
> increasing the dimension makes the lattice problem harder.
This text also seems like a distraction from the elephant.
> So if you're going to state hypotheses that go against established
> theory, you need to have some evidence. Otherwise, one can state anything
> that can't be proved or disproved -- (e.g. do you have concrete proof
> contradicting the Qanon and the covid vaccine conspiracies? No? Then I
> guess you should consider them to be as valid as their negations.)
This level of rhetoric regarding the nature of evidence is in a message
that starts from an outright false assumption regarding the stated
context and ignores an earlier reference to a paper explaining the gap?
Hmmm.
> The difference between saying (1) LWE is hard, but LWR with small
> rounding could be easy and (2) NTRU is hard, but Module-LWE could be
> easy is that (1) does not contradict any of the underlying theory or
> understanding upon which lattice crypto is built, whereas (2) would
> mean that the core of the established theory is wrong.
Please clarify.
What's covered by "the underlying theory or understanding upon which
lattice crypto is built"? Is this excluding, e.g., the original LLL
paper, the original BKZ paper, the original NTRU paper, and many more
years of design and analysis of algorithms? If this excludes papers on,
e.g., class-group computations, then what's the rationale for excluding
those, and how does this avoid making the argument circular?
How precisely would weaknesses triggered by the smallness and special
structure of \Q[X]/(X^256+1) contradict "the established theory"?
It's disturbing to see a message that's so driven by "security proofs"
that it makes a hypothesis that's false regarding the explicit context
and often false for NISTPQC---simply _assuming_ that one is starting
with particular field structures, because this is what has to be done to
make the proof work---and then tries to lead the reader to believe that
the existing "theory and understanding" guarantee the safety of modules,
evidently with no need to even _look_ at what's known about algorithms
exploiting those field structures.
---Dan