S-unit attacks

D. J. Bernstein
Aug 3, 2016, 5:32:35 AM
to cryptanalyti...@googlegroups.com
Getting back to a serious algorithmic topic now: S-units are standard
generalizations of units in algebraic number theory. This message
focuses on S-units as tools for cryptanalysis of lattice-based
cryptography, generalizing the known uses of units.

The first part of the message explains how to easily get around a few
minor obstacles (which have been portrayed as serious obstacles) in
generalizing the relevant unit computations to S-units.

The second part shows how a trivial observation about units (which has
been portrayed as new, and as specific to a particular class of fields)
works for arbitrary fields and is a small special case of a well-known
(and minor) NFS speedup called "free relations".

The third part evaluates the relative plausibility of some potential
attack avenues, drawing on what the community has learned from
experience with NFS and FFS.

Christopher J Peikert writes:
> The situation is even worse: when |S| \geq 2, the log-embedding of the
> S-units may not even be discrete, i.e., it is not a lattice (cf. the
> log-unit lattice).

This is not a problem at all: one simply extends the logarithm map in a
standard way, taking not just the logarithms of absolute values at
"infinite places" but also the logarithms of absolute values at all of
the "finite places" in S. For example, when the field is \Q,

* the absolute value |a/b|_infinity is the usual |a/b|;
* the absolute value |(a/b)2^e|_2 is 1/2^e for any odd a,b;
* the absolute value |(a/b)3^e|_3 is 1/3^e for any a,b coprime to 3;
* etc.

This generalization of the logarithm map is concisely reviewed in, e.g.,

http://www.math.clemson.edu/~jimlb/ConferenceTalks/PCMI2009/tatecourse2.pdf

and the fact that S-units are a lattice under this map (modulo roots of
unity) is explicitly stated there; see Theorem 2.1.3.
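For concreteness, here's a minimal Python sketch of this extended map
for the case K = \Q and S = {2,3}; the helper names are mine, not from
any library:

  # Extended logarithm map on \Q for S = {2,3}:
  # u |-> (log|u|_infinity, log|u|_2, log|u|_3).
  from fractions import Fraction
  from math import log

  def valuation(n, p):
      # exponent of the prime p in the nonzero integer n
      v = 0
      while n % p == 0:
          n //= p
          v += 1
      return v

  def log_S_embedding(u, S=(2, 3)):
      u = Fraction(u)
      coords = [log(abs(u))]                 # infinite place
      for p in S:                            # finite places in S
          v = valuation(u.numerator, p) - valuation(u.denominator, p)
          coords.append(-v * log(p))         # log|u|_p = -v_p(u)*log p
      return coords

  print(log_S_embedding(Fraction(8, 3)))     # an S-unit: sums to 0

The S-units here are exactly +-2^a 3^b; their images under this map
form a rank-2 lattice inside the plane of vectors whose coordinates
sum to 0, as the product formula predicts.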

All of the important computations for infinite places are also feasible
(usually easier!) for finite places. The idea of

* solving a close-vector problem in the unit lattice, to recover a
short g from any unit multiple ug

generalizes straightforwardly to

* solving a close-vector problem in the S-unit lattice, to recover a
short g from any S-unit multiple ug.

The underlying algorithmic questions---e.g., which number fields let us
find a short basis for the lattice---also generalize straightforwardly.
I already pointed to the relevant chapter of the Cohen textbook.
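To make the decoding step concrete, here's a minimal numpy sketch of
Babai's round-off method against a log-S-unit lattice, assuming for
simplicity that the basis matrix is square and full rank (the names
are mine):

  import numpy as np

  def babai_round(B, t):
      # B: rows are basis vectors for the log-S-unit lattice;
      # t = Log(u*g) for an S-unit u and a short secret g.
      coeffs = np.rint(np.linalg.solve(B.T, t))  # round coordinates
      return t - B.T @ coeffs                    # approximates Log(g)

Whether this recovers Log(g) depends on exactly the quantitative
details mentioned below: how short g is versus how short the known
basis B is.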

> When the original log-unit attack involves decoding a lattice, and
> moving to S-units leaves you without even a lattice to decode, I'd say
> that's a pretty relevant distinction.

It isn't relevant. There _is_ a lattice.

> One can recover some kind of discreteness by adding extra
> "coordinates" to the log-embedding

Exactly. This "kind of discreteness" is what's normally called a
"lattice". Why waste time talking about irrelevant non-lattices?

> But this new space is non-Archimedian, so it has little if any
> connection with the geometry of the ring, where we are trying to find
> a short element of a given ideal.

I have no idea what Peikert thinks he means by "geometry" here, but he's
clearly missing something like 100 years of development of what number
theorists call "geometry".

More to the point, non-Archimedean absolute values respect shortness in
very much the same way that Archimedean absolute values do, plus the
extra advantage that shortness isn't spoiled by any number of field
additions---which makes the fundamental computations _easier_.

> The analogous "log-S-unit attack" wouldn't even give you an element of
> the input ideal, but instead one in some other fractional ideal with
> arbitrary extra (positive or negative) powers of the ideals in S.

No, this is totally backwards.

For clarity let's focus on the most spectacular examples of these
attacks so far, the poly-time PQ key-recovery attacks (for example,
against Gentry's original FHE scheme at STOC 2009), where the key is a
short generator g of the specified ideal I. There are two attack stages:

* finding _some_ generator gu of I;
* finding the secret key g from gu.

Generalizing the second stage from the unit lattice to a larger S-unit
lattice means allowing u to be any S-unit rather than merely a unit,
i.e., allowing the first stage to start from various multiples of I
rather than merely I itself. Of course one has to work out quantitative
details regarding how large g is, how short the known basis is, etc.,
but all of this will vary rather smoothly with S.

In the cyclotomic case, this flexibility is overkill---there's already a
solid break without it in the first stage. But that doesn't indicate any
problem with the generalized attack, and it also says nothing about
whether the flexibility is useful for other examples.

Peikert is imagining the reduction modulo S-units as starting from a big
generator of the correct ideal I and then being thrown off course by the
presence of S, producing a generator of some IZ where Z is S-smooth. But
the cryptosystem constructs g to be very short (otherwise it wouldn't be
able to decrypt!), significantly shorter than one would expect for any
IZ for any reasonable size of S---and being able to find this is simply
a matter of having a short enough basis for the S-unit lattice.

To recap:

* Original attack: Obtain short gen of I from any gen of I.
* More useful: Obtain short gen of I from any gen of any IZ.
* Less useful: Obtain short gen of some IZ from any gen of I.

Moving from the first line to the second line is good. Moving from the
first line to the third line would be bad, but it's not at all what's
happening. The third line has the Z on the output, where the second line
has it on the input. This is what "totally backwards" is referring to.

End of part 1.

Now here's a trivial observation: Define f as, say, the polynomial

x^10+31x^9+41x^8+59x^7+26x^6+53x^5+58x^4+97x^3+93x^2+23x+84.

If r is a complex root of f then

84 = -r(r^9+31r^8+41r^7+59r^6+26r^5+53r^4+58r^3+97r^2+93r+23)

so choosing S to make 84 an S-unit forces the factors r and r^9+... to
also be S-units. This trivial observation about a few "extra" S-units
immediately generalizes to arbitrary number fields.
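A quick sympy check of this identity:

  from sympy import symbols, expand

  x = symbols('x')
  f = (x**10 + 31*x**9 + 41*x**8 + 59*x**7 + 26*x**6 + 53*x**5
       + 58*x**4 + 97*x**3 + 93*x**2 + 23*x + 84)
  cofactor = expand((f - 84) / x)      # x^9 + 31*x^8 + ... + 23
  assert expand(f - 84 - x*cofactor) == 0
  # since f(r) = 0, this says 84 = -r*cofactor(r) in \Z[r]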

Peikert has been frequently repeating one special case of this trivial
observation: constant coefficient +-1. This case is extremely common in
lattice-based cryptography, since larger coefficients end up requiring
more bandwidth for the same security level against known attacks. Before
I come back to security analysis (part 3 below), let me say more about
this random x^10+...+84 example.

Which prime ideals do we put into S to make 84 an S-unit? Take any prime
divisor p of 84, and any irreducible divisor g of the image of f in
\F_p[x]. Then the ring R = \Z[r] has a ring map r|->x to the field
\F_p[x]/g, and the kernel of this ring homomorphism is a prime ideal of
R containing 84. Define S as the set of these prime ideals. The product
of elements of S is 42R, and repeating the ideals over 2 produces
exactly 84R. (This polynomial f has squarefree discriminant, so R =
\Z[r] is the maximal order here and I can skip the algebraic details of
the right way to define factorizations for non-maximal orders.)

In particular, for each prime divisor p of 84, taking g=x produces the
prime ideal pR+rR. It's easy to see that the product

(2R+rR)^2 (3R+rR) (7R+rR)

is exactly rR, showing that r is an S-unit. What's important about this
perspective is that it shows that all of the S-unit information that I'm
talking about is completely captured by the factorizations of 2R, 3R, 7R
into prime ideals.
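Those factorizations are easy to compute; e.g., in sympy, with f as
above:

  from sympy import symbols, factor_list

  x = symbols('x')
  f = (x**10 + 31*x**9 + 41*x**8 + 59*x**7 + 26*x**6 + 53*x**5
       + 58*x**4 + 97*x**3 + 93*x**2 + 23*x + 84)
  for p in (2, 3, 7):
      # irreducible factors of f mod p name the prime ideals over p
      print(p, factor_list(f, x, modulus=p))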

More generally, one can quickly factor pR into prime ideals for _any_
prime number p, not just for the rare prime numbers that divide the
constant coefficient 84 (i.e., that produce prime ideals specifically of
the form pR+rR). This generalization is exactly what's called a "free
relation" in NFS: if all the prime ideals dividing pR are in the NFS
"factor base" (which is typically chosen as all the first-degree primes
up to some limit, although this isn't the optimal choice in general)
then the factorization of pR into those ideals is an easily computed
"relation" between the ideals.

The overall impact of free relations on NFS performance is small, and
becomes increasingly small as the NFS degree goes up: almost all of the
relations are found by more expensive searches. It doesn't matter here
whether NFS is being used for class-group/unit-group computations (as
in, e.g., https://blog.cr.yp.to/20140213-ideal.html) or for traditional
factorization or discrete logs.
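To see the rarity concretely for the same f: scan small primes and
print the degree pattern of f mod p. With a first-degree factor base,
a free relation at p needs the all-1s pattern, i.e., f splitting
completely mod p; by Chebotarev this happens with density 1/#Gal(f),
on the order of 1/10! if this f has the full symmetric Galois group.

  from sympy import symbols, factor_list, degree, primerange

  x = symbols('x')
  f = (x**10 + 31*x**9 + 41*x**8 + 59*x**7 + 26*x**6 + 53*x**5
       + 58*x**4 + 97*x**3 + 93*x**2 + 23*x + 84)
  for p in primerange(2, 50):
      _, factors = factor_list(f, x, modulus=p)
      print(p, sorted(degree(fac, x) for fac, _ in factors))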

> But it still remains the case that: (1) Z[x]/(x^p-x-1) is a very
> specific class of rings,

Z[x]/(x^p-x-1) is no more "specific" than the rings Z[x]/(x^(2^k)+1) that
are recommended in most ideal-lattice papers. The arguments for random
rings in lattice-based cryptography are even weaker than the arguments
for random curves in ECC---and, even in the unlikely event that users
end up generating their own rings, each user will want to be assured of
the security of the _specific_ ring holding that user's public key.

Figuring out security requires exploring attack algorithms---and the
resulting insights rarely have any connection to superficial notions of
how "specific" the target is. There are many broad classes of systems
that turned out to be far weaker than single "specific" systems.

> and (2) we can analytically find at least a few short independent units.

This is exactly the trivial observation stated above. What's funny is
that Peikert keeps stating this for NTRU Prime in particular, not
realizing that it's an extreme special case of an ancient, well-known
NFS tweak for all number fields.

> This is very different behavior from "random" or "hard" lattices. 
> It's also quite different from what we get with S-units

No. This trivial observation generalizes immediately to S-units in all
number fields, as illustrated by my random example above.

End of part 2.

Part 3: What are the most plausible avenues for extending the poly-time
post-quantum key-recovery attacks and subexponential-time pre-quantum
key-recovery attacks against cyclotomic Gentry, SV, Gentry--Halevi, etc.
to other rings? For example, might they work for the NTRU Prime rings?

These attacks rely critically on being able to find a short basis for
the unit lattice. Finding _any_ basis is what Cohen's famous textbook
calls one of the five "main computational tasks of algebraic number
theory". The scorecard for most number fields is

* pre-quantum: subexponential to find a basis (NFS), exponential to
find a short basis;

* post-quantum: polynomial (Eisenträger--Hallgren--Kitaev--Song,
building on Hallgren's work more than a decade ago, building on
Shor) to find a basis, exponential to find a short basis.

The only faster algorithms are for various number fields with unusually
small Galois groups---e.g., cyclotomics. NTRU Prime chooses a polynomial
x^p-x-1 with the largest possible Galois group, making automorphism
computations as difficult as possible.

Could the previous literature have missed some way that a short basis is
easy to write down for this particular polynomial? Sure, it's possible,
but to make it plausible one needs to either

* identify an attack strategy that works for _most_ polynomials and
that the previous literature missed, or

* identify a feature that distinguishes this polynomial from most
polynomials in an algorithmically useful way.

The first approach, if successful, is the sort of nightmare scenario
that could motivate staying away from ideal lattices entirely. If we
have so little understanding of the difficulty of such fundamental
ideal-lattice problems, why should we believe that ideal-lattice crypto
is based on anything more than security through obscurity?

Fortunately for ideal-lattice-based crypto, the topic doesn't seem to be
_so_ poorly understood! There are advances, but the core ideas seem to
be reasonably stable.

Peikert has instead been trying to follow the second approach,
repeatedly pointing out that the particular NTRU Prime polynomial allows
construction of a few units. But

* this trivial observation generalizes immediately to any polynomial
with constant coefficient +-1, which is already a rather broad
class; and

* the further generalization to S-units covers _all_ polynomials.

Peikert has been trying to distinguish S-units from the special case of
units, but the distinctions don't stand up to examination.

All of the information that Peikert is finding is a strict subset of the
information already captured by the ancient idea of "free relations" in
NFS---the most important algorithm to compute unit groups. It's still
possible that previous researchers ignored some particularly fast
special cases, but Peikert hasn't provided any reason to believe this;
he has merely reinvented a small part of an old idea.

Much faster small-characteristic FFS algorithms in recent years start
with something that can be viewed as a special type of free relation but
then make critical use of automorphisms to amplify this relation---the
automorphisms create a much more rigid function-field structure.
Similarly, a single short unit (x^3-1)/(x-1) in the cyclotomic ring
\Z[x]/(x^1024+1) isn't useful by itself, but automorphisms suddenly
produce a full-rank list of short units (x^(3i)-1)/(x^i-1), and the
attacks mentioned above rely critically on this.
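One can check the single-unit statement directly; here's a sympy
sketch, shrinking x^1024+1 to x^16+1 so that it runs instantly:

  from sympy import symbols, gcdex, Poly

  x = symbols('x')
  m = x**16 + 1           # stand-in for x^1024 + 1
  u = 1 + x + x**2        # (x^3 - 1)/(x - 1)
  s, t, h = gcdex(u, m, x)        # s*u + t*m = h = 1 over \Q
  print(Poly(s, x).all_coeffs())  # all integers, so u is a unit
  # in \Z[x]/(x^16+1); the automorphisms x |-> x^i (odd i) then
  # give the whole family (x^(3i) - 1)/(x^i - 1)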

These attacks aren't coming from a vacuum: serious study of related
algorithms has already given us a good idea of what the important tools
are for attackers. Automorphisms are very high on the list---and the
most plausible source of extended attacks.

ECRYPT recommended against small-characteristic fields for discrete-log
cryptography in 2005 ("Some general concerns exist about possible future
attacks ... As a first choice, we recommend curves over prime fields").
NTRU Prime similarly recommends against extra automorphisms. Both of
these recommendations have the virtue of preserving (often improving!)
network traffic, so one doesn't have to ask whether increased traffic
would be better used in other ways.

For comparison, recommending larger polynomial coefficients has vastly
less rationale---the risk that free relations (without automorphisms)
are dangerous, but only for units and somehow not for S-units---and
_would_ increase network traffic. Recommending LWE has the same issue.
From everything we know, those recommendations would _reduce_ security
for any particular amount of network traffic.

---Dan