two interesting new papers

Christopher J Peikert

Jul 21, 2016, 3:07:18 PM
to cryptanalytic-algorithms
Today eprint has two new papers on topics recently discussed on this list.

Kirchner and Fouque http://eprint.iacr.org/2016/717 give improved variants of subfield attacks on NTRU, and compare them to standard lattice basis-reduction attacks.  (Recall that the subfield attacks work increasingly well as the parameters become more "overstretched," i.e., as the modulus q grows relative to the size of the secret key.)
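For readers who want to see what "overstretched" looks like concretely, here is a minimal toy sketch (my own illustration, not code from the paper) of the standard NTRU lattice; the values of n and q below are placeholder assumptions, and the point is just that the secret (f, g) sits as an unusually short vector in a 2n-dimensional lattice whose determinant grows with q, so larger q makes that vector easier to find.

# Toy sketch of the standard NTRU lattice (illustrative only, not the paper's code).
import numpy as np

n, q = 8, 4096              # toy ring degree and modulus (assumed, illustrative values)

def rot(h, n, q):
    """n x n matrix of the map u -> u*h in Z_q[x]/(x^n - 1)."""
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            M[i][(i + j) % n] = h[j] % q
    return M

# In a real instance, the public key h = g * f^{-1} mod q comes from small
# secret polynomials f, g; here we just use a placeholder.
h = np.random.randint(0, q, size=n)

# Basis of L_h = {(u, v) : v = u*h mod q}, given by the rows of
#   [ I   rot(h) ]
#   [ 0   q*I    ]
# The secret (f, g) is a short vector in this lattice, and the larger q is
# relative to the size of f and g, the more "overstretched" the instance.
B = np.block([[np.eye(n, dtype=int), rot(h, n, q)],
              [np.zeros((n, n), dtype=int), q * np.eye(n, dtype=int)]])
print(B.shape)   # (2n, 2n)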

One of Kirchner-Fouque's main claims (enabled by new analysis of NTRU lattices) is that standard basis-reduction attacks actually perform as well as, or even better than, the subfield attacks.  A consequence is that the standard attacks apply just as well to NTRU Prime, which was designed to have no non-trivial subfields.

The authors offer many interesting conclusions:

We conclude that the shortest vector problem over module lattices seems strictly easier than the bounded distance decoding.

(NTRU is an example of the former; Ring-LWE, of the latter.)

Since the practical cost of transforming a NTRU-based cryptosystem into a Ring-LWE-based cryptosystem is usually small, especially for key-exchange (e.g. [3]), we recommend to dismiss the former, in particular since it is known to be weaker (see [40, Section 4.4.4]).

(Caveat: [40, Section 4.4.4] shows that for appropriate parameters and formalizations of the problems, NTRU <= search-RLWE.  The reduction is not so tight in terms of error sizes, but there are many plausible avenues for improvement.)

There is much more in the paper, and a lot to digest and verify.  It should be noted that the authors have a strong track record of cryptanalysis of lattice crypto.


Bai, Laarhoven and Stehlé http://eprint.iacr.org/2016/713 address the exponential space usage of lattice sieving algorithms, which form the asymptotically fastest known class of algorithms for SVP and related problems.  They reduce the required memory by using tuples of vectors rather than pairs, at the expense of runtime.
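To make the pair-versus-tuple idea concrete, here is a toy one-pass sketch (my own illustration under assumed toy parameters, not the paper's algorithm): with pairs one only tries differences v - w of list vectors, while with triples one can also try the signed combinations v ± w ± x, so a shorter list already yields usable reductions.

# Toy illustration of pairwise vs. triple reduction (not the paper's algorithm).
import itertools, random

def norm2(v):
    return sum(x * x for x in v)

def reduce_with_pairs(L):
    out = []
    for v, w in itertools.combinations(L, 2):
        d = tuple(a - b for a, b in zip(v, w))
        if 0 < norm2(d) < min(norm2(v), norm2(w)):
            out.append(d)
    return out

def reduce_with_triples(L):
    out = []
    for v, w, x in itertools.combinations(L, 3):
        for s1, s2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            d = tuple(a + s1 * b + s2 * c for a, b, c in zip(v, w, x))
            if 0 < norm2(d) < min(norm2(v), norm2(w), norm2(x)):
                out.append(d)
    return out

random.seed(0)
L = [tuple(random.randint(-5, 5) for _ in range(4)) for _ in range(20)]
print(len(reduce_with_pairs(L)), len(reduce_with_triples(L)))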

For example, using triples they estimate the space complexity to be 2^{0.1887n+o(n)}, improving on the previous best of 2^{0.2075n+o(n)}, with a time complexity as low as 2^{0.4812n+o(n)}.  (All correctness and complexity estimates are heuristic.)
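Ignoring the o(n) terms and all constants, one can plug these exponents into a quick back-of-the-envelope comparison; the dimension n below is an arbitrary illustrative choice, not a parameter from the paper.

# Rough comparison of the heuristic exponents quoted above (o(n) terms ignored).
n = 400                               # assumed illustrative dimension
space_pairs   = 2 ** (0.2075 * n)     # previous best space exponent (from the text)
space_triples = 2 ** (0.1887 * n)     # triple-sieve space exponent (from the text)
print(f"space, pairs:   ~2^{0.2075 * n:.0f}")
print(f"space, triples: ~2^{0.1887 * n:.0f}  (a factor ~{space_pairs / space_triples:.0f} less memory)")
print(f"time,  triples: ~2^{0.4812 * n:.0f}")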

They also "analyze the effects of using larger tuples for reduction, and conjecture how this provides a continuous tradeoff between the memory-intensive sieving and the asymptotically slower enumeration."  Such a tradeoff could potentially allow "hybrid" sieving-style algorithms to be used in large dimension, without using astronomical memory.

Sincerely yours in cryptography,
Chris

