yet another attack on ideal lattices


D. J. Bernstein

Oct 10, 2015, 6:37:09 PM
to cryptanalytic-algorithms
Chen, Lauter, and Stange have announced two fast attack strategies
against various large instances of Search-Ring-LWE:

https://eprint.iacr.org/2015/971

The targeted rings are Galois, with a provable equivalence between
Search-Ring-LWE and Decision-Ring-LWE, and this equivalence is actually
exploited by the attack. It will be interesting to hear how this is
explained by people who have been advertising such equivalences as an
indication of security.

Peikert wrote that "LPR'10 proves that search-R-LWE is (quantumly) at
least as hard as worst-case Ideal-SVP, in the ring of integers R of
*any* number field". If this is correct then the new paper also breaks
worst-case Ideal-SVP for the targeted fields, also undermining the
notion that worst-case-to-average-case reductions are an indication of
security---the reductions here are attack tools!

The new paper also means that the scorecard of broken ideal-lattice
problems now includes not just some short-generator problems (e.g.,
poly-time post-quantum key recovery for the Smart--Vercauteren system)
but also some Search-Ring-LWE problems. If the choice of ring doesn't
matter for security then apparently we have to throw away _all_ crypto
based on the Ring-LWE problem.

I don't think the situation is actually so bad. All of the exciting new
attacks are relying on interesting number-theoretic features of the
targeted rings and moduli. Number-theoretic features exploited by the
main Chen--Lauter--Stange attack include

* a Galois field (converting a decision attack into a search attack),
* a modulus factoring into prime ideals of small norm (so the
quotient fields can be searched), and
* an extra condition that seems hard to characterize but that the
authors say is encouraged by index-2 subfields.

As a historical note, my ideal-lattice blog post recommended

* using "a very large Galois group" (exactly the opposite of using a
Galois field),
* taking a modulus whose quotient ring "is a field" (so the only
factor has huge norm), and
* taking a field "of prime degree" (so there aren't any intermediate
subfields).

I said that I was "eliminating the structures that I find worrisome in
existing ideal-lattice-based encryption systems", but of course there's
some amount of guesswork here; what ultimately matters is how far the
attacks can be pushed.

---Dan

Christopher J Peikert

Oct 10, 2015, 10:19:37 PM
to cryptanalytic-algorithms
On Sat, Oct 10, 2015 at 6:37 PM, D. J. Bernstein <d...@cr.yp.to> wrote:
> Chen, Lauter, and Stange have announced two fast attack strategies
> against various large instances of Search-Ring-LWE:
>
>    https://eprint.iacr.org/2015/971
>
> The targeted rings are Galois, with a provable equivalence between
> Search-Ring-LWE and Decision-Ring-LWE, and this equivalence is actually
> exploited by the attack. It will be interesting to hear how this is
> explained by people who have been advertising such equivalences as an
> indication of security.

The explanation is simple: this paper attacks an instantiation of R-LWE with a very narrow error distribution -- much narrower than any theoretically sound or practically recommended ones, and certainly too narrow to allow any connection with worst-case lattice problems.  Indeed, efficient attacks on LWE (and hence R-LWE) for such narrow error distributions have been known for several years.

Background: the (Ring-)LWE problems are parameterized by an error distribution.   We have proofs that for appropriate Gaussian error distributions, these problems are hard on the average, assuming the worst-case hardness of certain lattice problems (on ideal lattices, in the case of R-LWE).  For R-LWE, the hardness proof requires the coefficients of the error polynomials to have standard deviation exceeding sqrt{n log n}.

Table 1 on page 8 of Chen-Lauter-Stange shows the instantiations the authors attacked.  The parameter sigma_0 is essentially the standard deviation of the error polynomials.  The largest value of sigma_0 they considered was 1.25, and the attacks the authors actually ran to completion were for sigma_0 = 1.  The authors write "The parameters sigma_0 in Table 1 represent the boundary of the power of our attack, i.e., we tried higher sigma_0 and the attack failed."  Last time I checked, 1 was much smaller than sqrt{n}.
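The gap is easy to quantify. For illustrative dimensions (not necessarily those in Table 1), the std dev sqrt{n log n} required by the hardness theorem dwarfs the sigma_0 ≈ 1 that was attacked:

```python
import math

def proof_sigma(n):
    """Error std dev sqrt(n log n) asked for by the worst-case hardness
    theorem, to compare against the sigma_0 of roughly 1 that was attacked."""
    return math.sqrt(n * math.log(n))

for n in (256, 512, 1024):  # illustrative dimensions, not the paper's
    print(n, round(proof_sigma(n), 1))
```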

Prior to this work, we already knew a fair bit about LWE with small errors.  For example, the work of Arora and Ge (ICALP'11) showed that LWE -- and hence R-LWE, which is a special case -- can become much easier when errors are small.  In particular, if the errors have magnitude bounded by a constant, then LWE is solvable in polynomial time, given a large enough polynomial number of samples (see their Theorem 3.1 for the formal statement).  So for Gaussian errors with constant standard deviation, we can solve LWE in polynomial (or perhaps quasi-polynomial) time.
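A minimal sketch of the Arora-Ge linearization idea (not the Chen-Lauter-Stange algorithm), with errors restricted to {0, 1} for simplicity and with toy, illustrative parameters: since e(e - 1) = 0, every sample yields a quadratic equation in the secret, which becomes *linear* in the monomials s_i s_j and s_i; with enough samples the system is solved by Gaussian elimination mod q.

```python
import random

def solve_mod(A, rhs, q):
    """Solve A x = rhs over Z_q (q prime) by Gaussian elimination,
    assuming the system has full column rank."""
    m, ncols = len(A), len(A[0])
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(ncols):
        piv = next(r for r in range(col, m) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, q)
        M[col] = [x * inv % q for x in M[col]]
        for r in range(m):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % q for x, y in zip(M[r], M[col])]
    return [M[i][ncols] for i in range(ncols)]

def arora_ge_demo(n=2, q=97, m=12, seed=1):
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]
    quad = [(i, j) for i in range(n) for j in range(i, n)]
    A, rhs = [], []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.randrange(2)  # error in {0, 1}, so e(e - 1) = 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        # (b - <a,s>)(b - 1 - <a,s>) = 0 expands, with t = <a,s>, to
        #   t^2 - (2b - 1) t + b(b - 1) = 0,
        # a linear equation in the monomials s_i s_j and s_i.
        row = [a[i] * a[j] * (1 if i == j else 2) % q for (i, j) in quad]
        row += [-(2 * b - 1) * a[i] % q for i in range(n)]
        A.append(row)
        rhs.append(-b * (b - 1) % q)
    x = solve_mod(A, rhs, q)
    return s, x[len(quad):]  # the recovered linear monomials are the secret
```

For wider (but still bounded) errors the same trick uses a higher-degree vanishing polynomial, so the number of monomials, and hence the sample cost, grows with the error width; this is why narrow errors are so dangerous.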

The algorithm of Chen-Lauter-Stange is certainly different from, and probably faster than, the one of Arora-Ge in this particular setting.  But the point is that we've known for a while that (Ring-)LWE with such small errors is easy to break, without using any number theory whatsoever (when unlimited samples are available, at least).
 
> Peikert wrote that "LPR'10 proves that search-R-LWE is (quantumly) at
> least as hard as worst-case Ideal-SVP, in the ring of integers R of
> *any* number field". If this is correct then the new paper also breaks
> worst-case Ideal-SVP for the targeted fields, also undermining the
> notion that worst-case-to-average-case reductions are an indication of
> security---the reductions here are attack tools!

My statement is correct, but your conclusion "the new paper also breaks worst-case Ideal-SVP" is incorrect.  Again, this is because the authors attacked a version of Ring-LWE that does not come close to satisfying the hypotheses of the worst-case hardness theorem from LPR'10.

(To be clear, the authors of the paper never claim to break Ideal-SVP.)
 
> The new paper also means that the scorecard of broken ideal-lattice
> problems now includes not just some short-generator problems (e.g.,
> poly-time post-quantum key recovery for the Smart--Vercauteren system)
> but also some Search-Ring-LWE problems.

Let the record show that the scorecard for cryptographic (i.e., average-case) lattice problems stands as follows:

* Instantiations of (Ring-)LWE following the worst-case hardness proofs: no known subexponential-time (in n) attacks, even using quantum.

* Problems/instantiations lacking worst-case hardness proofs: too many broken ones to count.

To date, worst-case hardness proofs are pitching a shutout.

Sincerely yours in cryptography,
Chris


Christopher J Peikert

Oct 12, 2015, 4:52:39 AM
to cryptanalytic-algorithms
Brief addendum:


> * Instantiations of (Ring-)LWE following the worst-case hardness proofs: no known subexponential-time (in n) attacks, even using quantum.

To be clear, this refers to the typical parameterization with an inverse-polynomial noise "rate" relative to the modulus (corresponding to polynomial approximation factors for worst-case lattice problems).  For smaller noise rates, standard time-approximation tradeoffs (e.g., Schnorr) apply.

Chris