faster LWE attacks


D. J. Bernstein

Jun 20, 2015, 3:04:34 AM
to cryptanalyti...@googlegroups.com
Kirchner and Fouque have an upcoming paper at Crypto with further
improvements in the BKW line of attacks against LWE:

https://eprint.iacr.org/2015/552

This paper reports, e.g., a 2^102 attack against the "medium"/"about
2^150 operations" Lindner--Peikert LWE parameters; a 1.27^(n/lg lg n)
attack against the Lyubashevsky--Palacio--Segev "provably as secure as
subset sum" cryptosystem; and a 2^(n/(2 log log q)) attack against NTRU.

---Dan

Christopher J Peikert

Jun 20, 2015, 1:55:27 PM
to cryptanalytic-algorithms
This work of Kirchner and Fouque is very good, and its claims look correct.  In particular, it estimates a 2^102 attack on certain LWE parameters related to those given in Lindner-Peikert.

However, KF does *not* give -- nor does it claim -- any improved attack on the actual LP parameters, "medium" or otherwise.

How can both of these statements be true?  The issue is the parameter m representing the number of LWE samples available to the attacker.  It is well-known that LWE appears to become appreciably easier as m grows very large.  In the LP parameters, m is small, but the KF attack needs m to be huge.

Specifically, Figure 4 in LP gives concrete parameters for its LWE-based cryptosystem, in which the number of samples m <= 3n is a few hundred.  The security analysis reflects that fact by explicitly ruling out BKW-like algorithms that require too many samples (see Footnote 3).

Table 2 in KF estimates attack times for nearly the same LP parameters, except that "We assume that we have access to as many samples as we want, which is not the case in the [LP] cryptosystem."  E.g., for LP's "medium" parameters n, q, s, the KF algorithm requires m ~ 2^82 samples and space.

In general, BKW-like algorithms (including KF) require a huge number of samples to achieve their best runtimes, but cryptosystems typically provide only a small number of samples.  It is a major open problem to make such algorithms work nearly as efficiently in the low-sample regime as in the high-sample one.  (Work by Lyubashevsky in RANDOM'05 makes progress on this question, but there is still a big gap.)
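
To make the amplification idea concrete, here is a toy numpy sketch (my own illustration, in the spirit of the RANDOM'05 approach, not its actual algorithm): manufacture new "samples" as sums of random size-k subsets of the m given ones, at the price of errors that grow like sqrt(k)*sigma.

import numpy as np

def amplify(A, b, q, k, count, rng):
    """Each new pseudo-sample is the sum of k of the m given samples (A, b),
    so its error is a sum of k errors, i.e. roughly sqrt(k)*sigma.  The
    a-parts are only near-uniform when k is large enough (leftover hash
    lemma), and the error growth is why this does not close the gap."""
    rows = rng.choice(A.shape[0], size=(count, k))  # random size-k index sets
    return A[rows].sum(axis=1) % q, b[rows].sum(axis=1) % q

For instance, amplify(A, b, q, k=16, count=2**20, rng=np.random.default_rng()) turns a few hundred samples into 2^20 of them, but with errors about 4*sigma, and BKW's runtime is very sensitive to that noise rate.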

As for NTRU and subset-sum/LPS, the KF paper gives some very neat asymptotic improvements, which arise from the fact that the secrets are binary or ternary, rather than Gaussian as in LWE cryptosystems like LP.  However, KF's algorithms do not yet yield concretely better attacks for any suggested parameterizations: page 2 says "Note that there is a large value hidden in the o(1) term, so that our algorithm does not yield practical attacks for recommended NTRU parameters."  Perhaps this will change with further work.

Chris

pole.k...@gmail.com

Jun 20, 2015, 10:39:36 PM
to cryptanalyti...@googlegroups.com
- On the proven side of BKW with a small number of samples, one can look at Section 5.2.
Assuming Gaussians behave nicely, 2n samples* with error sigma can be transformed into an LWE instance with unboundedly many samples, a secret of size sigma, and errors of size sqrt(3)*sigma^2*n.
The simulation then predicts a number of bit operations of ~2^156 (*9).
However, BKW behaves much more nicely than some worst-case oracle.
Many works have amplified the number of samples without problems; see, e.g., Section 4 of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.137.3510&rep=rep1&type=pdf
So the real runtime of a BKW attack against the LP cryptosystem is still unclear; the figure above is just a lower bound.
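
For concreteness, here is a toy numpy sketch of the rerandomization half of this transformation (the standard trick of making the error vector the new secret; the amplification step and the sqrt(3)*sigma^2*n error bound are not reproduced here, and I assume q prime and A1 invertible, which holds with good probability for random A1):

import numpy as np

def inv_mod(A, q):
    """Invert a square matrix over Z_q (q prime) by Gaussian elimination."""
    n = len(A)
    M = np.concatenate([A % q, np.eye(n, dtype=np.int64)], axis=1)
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r, i] % q)  # assumes invertibility
        M[[i, piv]] = M[[piv, i]]
        M[i] = (M[i] * pow(int(M[i, i]), -1, q)) % q
        for r in range(n):
            if r != i:
                M[r] = (M[r] - M[r, i] * M[i]) % q
    return M[:, n:]

def to_normal_form(A, b, q):
    """Given b = A s + e (mod q) with 2n rows, use the first n samples to
    re-express the rest with the error vector e1 as the new secret:
    b2 - A2 A1^{-1} b1 = (-A2 A1^{-1}) e1 + e2  (mod q)."""
    n = A.shape[1]
    A1, b1, A2, b2 = A[:n], b[:n], A[n:], b[n:]
    T = A2 @ inv_mod(A1, q) % q
    return (-T) % q, (b2 - T @ b1) % q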

Also, if we are given a large number of messages, one can do a Fourier step for each of them and succeed on those messages where the sender happened to use small errors.

*: in fact there are 2n+128 samples.

- On the other hand, lattice algorithms are simple to analyze, and they are faster for small instances or low-noise instances.
BKZ-reducing the dual of the basis given in Appendix B.1 with blocksize 165, using 324 samples (< n+128), gives a basis where Babai's algorithm will succeed with probability > 2^-67.
The total number of nodes in the enumeration should be less than 2^80, so that the number of bit operations is at most 2^90.
Recovering one message (given only one message) should take ~2^120 nodes with LP enumeration, and even fewer with extreme pruning.
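
For reference, the Babai step itself is tiny (a toy numpy sketch below; the expensive part is of course the blocksize-165 BKZ reduction, which I assume is done by an external library such as fplll):

import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B (no normalization)."""
    Bs = B.astype(float)
    for i in range(len(Bs)):
        for j in range(i):
            Bs[i] -= (Bs[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return Bs

def babai_nearest_plane(B, t):
    """Return a lattice vector close to the target t.  Decoding succeeds
    roughly when the error lies in the box spanned by the Gram-Schmidt
    vectors, which is why the success probability depends on the quality
    of the BKZ-reduced basis B."""
    Bs = gram_schmidt(B)
    v = t.astype(float)
    for i in range(len(B) - 1, -1, -1):
        v -= round(v @ Bs[i] / (Bs[i] @ Bs[i])) * B[i]
    return t - v  # t - v is in the lattice; v is the decoding error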

The dual algorithm (Appendix B.2) with blocksize 175 needs ~2^21 short (and roughly "independent") vectors.
Assuming they can all be generated in one enumeration, this gives at most ~2^85 nodes.
Adding a Fourier step improves on this figure by a factor of ~2^5; schematically, the distinguisher is sketched below.
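
(A toy sketch, with W assumed to be the matrix whose rows are the short dual vectors found by the enumeration:)

import numpy as np

def dual_distinguish(W, b, q):
    """W holds short vectors w with w.A = 0 (mod q), so w.b = w.e (mod q):
    concentrated near 0 for LWE samples, uniform for random b.  The averaged
    character sum (the Fourier step) is noticeably positive in the LWE case
    and ~0 otherwise; the ~2^21 vectors must be roughly independent for the
    average to concentrate."""
    return np.cos(2 * np.pi * (W @ b % q) / q).mean()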


" which arise from the fact that the secrets are binary or ternary, rather than Gaussian as in LWE cryptosystems like LP." : not really, being small in the L2 sense is enough (section 3.6).
In the LP case, the variance is ~ 10 which is quite small in absolute value, and much smaller than what is covered by Regev's reduction (~n).

Paul