more Ring-LWE attacks


D. J. Bernstein

Feb 27, 2016, 2:36:19 AM
to cryptanalyti...@googlegroups.com
https://eprint.iacr.org/2016/193.pdf presents a "new infinite family of
Galois number fields" with broken Ring-LWE problems "where the relative
standard deviation parameter is allowed to grow to infinity" (so the
Arora--Ge method doesn't take polynomial time). Examples of successful
<1-hour attacks are given with scaled error width as high as 14.

Also of interest, and related: A few days ago, in his invited PQCrypto
2016 talk "Challenges for Lattice Cryptography", Steven Galbraith
pointed to several recent papers on NTRU/Ring-LWE, and expressed
skepticism as to whether these systems are ready for standardization. I
think videos will be posted eventually, so we'll be able to see his
exact words.

The next day three NIST representatives served on a panel regarding NIST
standardization of post-quantum cryptography. I asked them what they
thought about this. They said it was important to read the papers---of
course true, but the real question is how the papers are going to be
converted into risk assessments.

---Dan

Christopher J Peikert

Feb 27, 2016, 8:36:44 AM
to cryptanalyti...@googlegroups.com
On Saturday, February 27, 2016, D. J. Bernstein <d...@cr.yp.to> wrote:
> https://eprint.iacr.org/2016/193.pdf presents a "new infinite family of
> Galois number fields" with broken Ring-LWE problems "where the relative
> standard deviation parameter is allowed to grow to infinity" (so the
> Arora--Ge method doesn't take polynomial time). Examples of successful
> <1-hour attacks are given with scaled error width as high as 14.

It is indeed important to read the papers.

Please note that this paper (along with all prior ones) applies to an ad-hoc variant of RLWE that is not backed by any worst-case hardness theorem.  For instantiations of RLWE that admit such theorems, there is still no published attack that meaningfully outperforms generic attacks that are agnostic to the ring structure (and which take time at least exponential in the ring dimension for typical parameters).

Specifically in this case:

* The attack applies to a "non-dual" variant of RLWE (see page 2), where errors are generated from a spherical Gaussian over the ring R rather than over its dual ideal R^∨.  The importance of using R^∨ for security and for the tightest parameters is thoroughly explained in the original RLWE paper (Section 3.3, "Why This is the Right Definition of Ring-LWE") and its companion "toolkit" paper.  (A toy sketch making this distinction concrete follows the list.)

* Moreover, a paper by Castryck, Iliashenko, and Vercauteren (appearing at the upcoming Eurocrypt) explains why using non-dual RLWE, as was done in the ELOS Crypto'15 attack (previously discussed in this space), can reveal LWE equations having no errors at all, which are trivially insecure.  I can't find the full paper yet, but this blog post by the authors summarizes the work.  (A second sketch below illustrates the mechanism.)
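
To make the non-dual distinction concrete, here is a minimal toy sampler (my own sketch, not taken from any of the papers) for non-dual RLWE over R_q = Z_q[x]/(f(x)), with the error drawn as a rounded spherical Gaussian directly over the coefficients of R.  All parameters (q, f, sigma) are hypothetical toy values; a dual-form sampler would instead shape the error by the geometry of R^∨:

    import numpy as np

    # Hypothetical toy parameters -- real instantiations use specific
    # number fields and much larger degrees and moduli.
    q = 257
    f = np.array([1, 0, 0, 0, 1])   # f(x) = x^4 + 1, coeffs low -> high
    n = len(f) - 1                  # ring degree
    sigma = 3.0                     # error width parameter
    rng = np.random.default_rng(0)

    def poly_mod(p):
        """Reduce p modulo the monic f(x), coefficients modulo q."""
        p = p.astype(np.int64) % q
        for i in range(len(p) - 1, n - 1, -1):
            c = p[i]
            if c:
                # subtract c * x^(i-n) * f(x) to kill the degree-i term
                p[i - n : i + 1] = (p[i - n : i + 1] - c * f) % q
        return p[:n]

    def ring_mul(a, b):
        """Multiply a, b in R_q = Z_q[x]/(f(x))."""
        return poly_mod(np.convolve(a, b))

    def rlwe_sample(s):
        """One non-dual sample (a, b = a*s + e): the error e is a
        rounded spherical Gaussian over R's coefficients."""
        a = rng.integers(0, q, n)
        e = np.rint(rng.normal(0, sigma, n)).astype(np.int64)
        return a, (ring_mul(a, s) + e) % q

    s = rng.integers(0, q, n)       # secret ring element
    a, b = rlwe_sample(s)

(For the power-of-2 cyclotomic chosen here, coefficient-wise Gaussians happen to agree with spherical Gaussians in the canonical embedding, which keeps the toy honest; for the number fields targeted by the attacks, the two embeddings can differ wildly.)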
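
And here is a second toy sketch (again mine, with hypothetical parameters) of the kind of leakage behind the Castryck--Iliashenko--Vercauteren observation.  Suppose f(1) = 0 mod q, so that "evaluate at x = 1" is a ring homomorphism from R_q to Z_q; an RLWE sample (a, b = a*s + e) then collapses to the single mod-q equation b(1) = a(1)*s(1) + e(1), and everything turns on how e(1) is distributed:

    import numpy as np

    rng = np.random.default_rng(1)
    q, n, sigma = 257, 64, 1.0      # hypothetical toy parameters
    trials = 50000

    # e(1) is just the sum of the n error coefficients.
    e = np.rint(rng.normal(0, sigma, (trials, n))).astype(np.int64)
    e1 = e.sum(axis=1) % q

    # With a narrow spherical error, e(1) mod q concentrates in a band
    # of width O(sigma * sqrt(n)) around 0 -- and is exactly 0 with
    # noticeable probability, handing the attacker *error-free* linear
    # equations in the unknown s(1).
    dist = np.minimum(e1, q - e1)   # distance from 0 mod q
    print("Pr[e(1) = 0 mod q]          :", np.mean(e1 == 0))
    print("Pr[dist <= 2*sigma*sqrt(n)] :",
          np.mean(dist <= 2 * sigma * np.sqrt(n)))

Collect enough samples where e(1) = 0 (or lies in a small known set) and s(1) falls to linear algebra or a small exhaustive search; that is the elementary shape of these attacks.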

> Also of interest, and related: A few days ago, in his invited PQCrypto
> 2016 talk "Challenges for Lattice Cryptography", Steven Galbraith
> pointed to several recent papers on NTRU/Ring-LWE, and expressed
> skepticism as to whether these systems are ready for standardization. I
> think videos will be posted eventually, so we'll be able to see his
> exact words.

The only other recent papers I'm aware of (e.g., this one and this one) all concern NTRU-type problems that also lack worst-case hardness theorems; the authors explicitly say that their attacks do not apply to RLWE (properly instantiated).

In summary, worst-case hardness proofs are still "pitching a shutout" over ad-hoc problems that lack such proofs (and the scorecard is even more lopsided than before).  As I have written before:

> My final conclusion is that worst-case security reductions are really
> important in lattice cryptography, where there is so much rope to hang
> oneself with (i.e., flexibility in generating random instances). In
> return for the discipline they impose, worst-case reductions give a
> hard-and-fast guarantee that the cryptosystem is at least as hard to
> break as the hardest instances of some underlying problem. This gives
> a true lower bound on security, and prevents the kind of unexpected
> weaknesses that have so often been exposed in schemes that lack such
> reductions.

Recent developments further reinforce this point.

Sincerely yours in cryptography, Chris

Christopher J Peikert

Feb 27, 2016, 11:20:55 AM
to cryptanalytic-algorithms
A quick addendum:
 
> Also of interest, and related: A few days ago, in his invited PQCrypto
> 2016 talk "Challenges for Lattice Cryptography", Steven Galbraith
> pointed to several recent papers on NTRU/Ring-LWE, and expressed
> skepticism as to whether these systems are ready for standardization. I
> think videos will be posted eventually, so we'll be able to see his
> exact words.

Steven Galbraith's slides are available on his web page.  I don't see anything in the slides about skepticism or standardization -- maybe it's in the video.  However, I did see this (but not until after sending my earlier message, I swear!):

> I recommend you to read Chris Peikert's blog post "What does GCHQ's
> cautionary tale mean for lattice cryptography?"
> http://web.eecs.umich.edu/~cpeikert/soliloquy.html
>
> My final conclusion is that worst-case security reductions are really
> important in lattice cryptography, where there is so much rope to hang
> oneself with (i.e., flexibility in generating random instances).
>
> Moral: Having a worst-case reduction gives more security.

(Later, in the context of chosen-ciphertext attacks, he writes that search-decision reductions yield "less" security.  I disagree: a system not designed to withstand CCA should be assumed completely insecure against such an attack, without good reason to think otherwise.  And there's nothing less secure than completely insecure.)

Christopher J Peikert

Apr 1, 2016, 3:20:13 PM
to cryptanalyti...@googlegroups.com
Greetings --

I've just posted a new paper ( http://web.eecs.umich.edu/~cpeikert/pubs/instantiate-rlwe.pdf , abstract below) that comprehensively reviews the known attacks on certain Ring-LWE instantiations, and gives a new exposition of how and why they work.  In all cases, the attacks succeed simply because the error distributions are insufficiently "well spread" relative to the ring; the paper contains several figures that depict this visually.

On the positive side, the paper also shows that any Ring-LWE instantiation supported by (or only "almost" supported by) the known worst-case hardness theorems is *provably immune* to all these attacks (and broad generalizations thereof).  This should allay any concern that this class of attacks poses any risk to properly instantiated Ring-LWE.
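
For intuition only, here is a toy numerical illustration (mine, with hypothetical parameters -- not the paper's actual theorem or parameters) of how the "well spread" condition shuts down the homomorphism-style distinguishers discussed earlier in this thread.  Redoing the evaluate-at-1 experiment, the leakage visibly vanishes once the error is wide relative to the ring:

    import numpy as np

    rng = np.random.default_rng(2)
    q, n, trials = 257, 64, 50000

    def bias_of_e1(sigma):
        """Toy Monte Carlo estimate of the statistical distance of
        e(1) mod q from uniform, for rounded spherical Gaussian error."""
        e = np.rint(rng.normal(0, sigma, (trials, n))).astype(np.int64)
        counts = np.bincount(e.sum(axis=1) % q, minlength=q) / trials
        return 0.5 * np.abs(counts - 1.0 / q).sum()

    # Narrow error: e(1) is heavily biased, so evaluating at 1 leaks.
    # Wide error (sigma * sqrt(n) >> q): e(1) is statistically close to
    # uniform mod q, and distinguishers of this class get no signal.
    for sigma in (1.0, 64.0):
        print("sigma =", sigma, " estimated bias =", bias_of_e1(sigma))

The paper's immunity guarantee is of course far stronger and more general than this one-dimensional picture, covering the ring of integers of any number field and broad generalizations of the attacks.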

Sincerely yours in cryptography,
Chris

=======

The learning with errors over rings (Ring-LWE) problem---or
more accurately, family of problems---has emerged as a promising
foundation for cryptography due to its practical efficiency,
conjectured quantum resistance, and provable worst-case
hardness: breaking certain instantiations of Ring-LWE is at least
as hard as quantumly approximating the Shortest Vector Problem on
any ideal lattice in the ring.

Despite this hardness guarantee, several recent works have shown that
certain instantiations of Ring-LWE can be broken by relatively simple
attacks.  While the affected instantiations are not supported by
worst-case hardness theorems (and were not ever proposed for
cryptographic purposes), this state of affairs raises natural
questions about what other instantiations might be vulnerable, and in
particular whether certain classes of rings are inherently unsafe for
Ring-LWE.

This work comprehensively reviews the known attacks on Ring-LWE and
vulnerable instantiations.  We give a new, unified exposition which
reveals an elementary geometric reason why the attacks work, and
provide rigorous analysis to explain certain phenomena that were
previously only exhibited by experiments.  In all cases, the
insecurity of an instantiation is due to the fact that its error
distribution is insufficiently "well spread" relative to its ring.
In particular, the insecure instantiations use the so-called
non-dual form of Ring-LWE, together with spherical error
distributions that are much narrower and of a very different shape
than the ones supported by hardness proofs.

On the positive side, we show that any Ring-LWE instantiation which
satisfies (or only almost satisfies) the hypotheses of the
"worst-case hardness of search" theorem is provably immune to
broad generalizations of the above-described attacks: the running time
divided by advantage is at least exponential in the degree of the
ring.  This holds for the ring of integers in any number field,
so the rings themselves are not the source of insecurity in the
vulnerable instantiations.  Moreover, the hypotheses of the worst-case
hardness theorem are nearly minimal ones which provide these
immunity guarantees.
