ideal-svp attacks?


D. J. Bernstein

Jul 17, 2016, 11:37:33 AM
to cryptanalyti...@googlegroups.com
Some background first, then some discussion of attacks.

2010 Lyubashevsky--Peikert--Regev (https://eprint.iacr.org/2012/230.pdf)
say that their "main contributions in this paper are to define a
ring-based variant of LWE and to prove its hardness under worst-case
assumptions on ideal lattices." They claim that "despite considerable
effort" there has been "no significant progress" against "SVP and other
problems on ideal lattices". They conclude that Ring-LWE has "very
strong hardness guarantees".

There's an important restriction here: for crypto we need decision
problems to be hard, and the search-to-decision reduction inside the
LPR10 proof requires Galois number fields (and split moduli). Actually,
the paper states this reduction specifically for cyclotomic number
fields, but it

* emphasizes the role of automorphisms in the proof and

* says that it is "likely" that the reduction "can also be somewhat
extended beyond cyclotomic number fields".

2014 Eisentraeger--Hallgren--Lauter proved this extension in Section 4
of https://eprint.iacr.org/2014/784.pdf. 2015 Peikert retroactively
wrote "Yes, for any Galois number field (identical proof)."

Of course there are also other important restrictions. The ideal-lattice
problem is assumed to be hard for quantum computers, not just today's
computers; and it allows huge approximation factors, although not huge
enough to be reached by BKZ in a reasonable time. The theorems also
assume large noise (as https://eprint.iacr.org/2016/351.pdf emphasizes)
and are very loose, so they don't end up saying anything about any
proposed real-world cryptosystems---but one can imagine using much
larger parameters where they _would_ say something (with extra work to
quantify all the unspecified polynomial parts, as illustrated by the
2^504 example recently published by Koblitz and Menezes).

The nightmare scenario for this "very strong hardness guarantee"---and
for all the papers that have cited it---is that someone breaks the
ideal-lattice problem that LPR10 assumed to be hard. Here are two
different levels of what "break" could mean:

* Mini-nightmare: Faster attacks against concrete parameters. This
has already been going on for years, with continued improvements in
general lattice attacks. On the other hand, (1) people hope that
a somewhat larger dimension is enough security margin, and (2) the
theorems from LPR10 etc. are actually stated as purely asymptotic
results about polynomial reductions, so even a 100x improvement in
the dimension wouldn't undermine those theorems.

* Full-blown poly-nightmare: Polynomial-time attacks. If the
polynomial is even close to reasonable then this would eliminate
any hope for concrete theorems about specific parameters.

More importantly, this would destroy the allegedly "very strong
hardness guarantees" from [LPR10]. The core argument for selecting
Ring-LWE as a lattice problem wouldn't merely be a weak asymptotic
argument saying nothing about any real-world cryptosystem; it would
be a _vacuous_ argument, an argument that had been _obliterated_ on
its own terms.

I'm not the only person to have pointed out that a very easy way to
obtain what's deceptively called "provable security"---a proof that
cryptosystem C is as hard to break as problem P---is to choose a weak
problem P. The full-blown poly-nightmare scenario would show that
Ring-LWE has exactly this vacuous type of "provable security". Sure,
Ring-LWE could still be secure even if the "hard" lattice problem is
broken, but this would be measured the right way---by cryptanalysis---
without the meaningless proofs.

Sometimes people believe that "provable security" serves as some sort of
magical boundary stopping attacks. As confirmation of this belief, they
often point to a sqrt(n) limit in Regev's LWE theorem matching a sqrt(n)
limit in the Ding/Arora--Ge attack. But a closer look shows that the
Regev boundary is quite far from the Ding/Arora--Ge boundary; see
Kirchner's message dated 2 Mar 2015 16:04:41 -0800. When there are many
theorems with quantitative limits and many attacks with quantitative
limits, and one insists on simplifying all the limits down to crude
"sqrt(n)" summaries, _of course_ there will be meaningless collisions in
these summaries. Meanwhile there are many examples of the belief being
severely contradicted by attacks against the underlying problem: e.g.,
Shor's algorithm smashing various "provably secure" constructions. The
poly-nightmare scenario for Ring-LWE would be another example of this.

With this background in mind, I happened to be glancing at page 22 of
https://eprint.iacr.org/2016/127.pdf (Albrecht--Bai--Ducas) and was
surprised to see an offhand comment that "the reduction of [LPR10] is
vacuous" for some number fields---"worst-case Ideal-SVP" is "solvable in
quantum polynomial time for approximation factors 2^O~(sqrt(n))".

Do the authors not realize how serious a statement this is? Calling the
LPR10 reduction "vacuous"---even if only for _some_ number fields---is a
shocking assault on the entire Ring-LWE religion! Why do they bury this
statement at the end of their paper?

The particular number fields identified are maximal real subfields of
cyclotomic fields, i.e., fields of the form Q(zeta_n+1/zeta_n). These
fields are Galois, so they fit the Ring-LWE theorems if noise widths
etc. are chosen large enough. I don't see anything in the previous
Ring-LWE literature, or the ideal-lattice literature more generally,
that recommends against these fields, except for the NTRU Prime paper
and my earlier postings on the same topic.

Albrecht--Bai--Ducas actually choose n to be a "safe prime" (to avoid
proper subfields other than \Q), but this doesn't seem relevant to the
attack. They do say that it's important that the class number is small,
i.e., that ideals have a good chance of being principal. Of course, the
2014 Campbell--Groves--Shepherd paper claimed to be able to efficiently
find short generators of principal ideals in cyclotomic fields:

* First find _some_ generator in quantum poly time. Biasse pointed
out serious flaws in the claimed algorithm, and CGS didn't maintain
their claim after that, but there seems to be consensus that this
step is rescued by an algorithm of Biasse and Song. This doesn't
depend on the choice of number field.

* Then reduce the generator to a short generator. CGS suggested LLL
using a well-known basis for "cyclotomic units", and there's
consensus that this works well (not necessarily with probability
1---"cyclotomic units" aren't necessarily all units). This adapts
easily to Q(zeta_n+1/zeta_n); see, e.g., the NTRU Prime paper.
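
To make this second step concrete, here is a minimal numerical sketch: a
heuristic toy, not the CGS/CDPR algorithm. It uses p = 11, represents field
elements by their vectors of complex embeddings, and does a Babai-style
round-off against the cyclotomic-unit logs instead of LLL; all names and
parameters are purely illustrative.

import numpy as np

p = 11                               # tiny prime, for illustration only
ks = np.arange(1, p)                 # the p-1 complex embeddings zeta -> zeta^k
zeta = np.exp(2j * np.pi * ks / p)   # images of zeta under each embedding

def Log(x):
    # log-embedding: vector of log|sigma_k(x)|; sums to log|N(x)|, which is 0 for units
    return np.log(np.abs(x))

# Classical cyclotomic units u_j = (zeta^j - 1)/(zeta - 1), j = 2 .. (p-1)/2.
units = [(zeta**j - 1) / (zeta - 1) for j in range(2, (p + 1) // 2)]
B = np.array([Log(u) for u in units])    # rows generate (a sublattice of) the log-unit lattice

# Fake instance: a short generator s, hidden by a random product of the units above.
rng = np.random.default_rng(1)
s = sum(int(c) * zeta**j for j, c in enumerate(rng.integers(-2, 3, size=p - 1)))
e = rng.integers(-5, 6, size=len(units))
g = s * np.prod([u**int(ei) for u, ei in zip(units, e)], axis=0)

# Round-off: Log(g) = Log(s) + e.B with e integral, so solve by least squares,
# round, and divide the corresponding unit product back out of g.
c, *_ = np.linalg.lstsq(B.T, Log(g), rcond=None)
g_short = g / np.prod([u**int(np.round(ci)) for u, ci in zip(units, c)], axis=0)

print("||g||       =", np.linalg.norm(g))         # typically huge
print("||g_short|| =", np.linalg.norm(g_short))   # typically close to ||s||
print("||s||       =", np.linalg.norm(s))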

My understanding is that Cramer--Ducas--Peikert--Regev computed how
short the result is, and Albrecht--Bai--Ducas noticed that this is short
enough to conclude that "the reduction of [LPR10] is vacuous". I haven't
checked the details here, but this seems plausible.

Does the same attack work against arbitrary number fields? No: there's
the problem of finding a good basis for the unit group. All of the fast
algorithms known for this classic problem are for extremely special
fields with small Galois groups. See the NTRU Prime paper for further
comments and references.

Does the same attack work against cyclotomics Q(zeta_n)? These have
minimum-size Galois groups---i.e., they're Galois---but they're also
"CM", forcing them to have very large class numbers, i.e., a very small
chance of ideals being principal. For comparison, Q(zeta_n+1/zeta_n) is
like most number fields in having small class number.

By highlighting the role of the class number in this attack, ABD
implicitly suggest that cyclotomics are immune. I'm skeptical. There are
many easy ways to convert any ideal I into a related principal ideal
J---for example, take J = I^h where h is the class number---so to attack
cyclotomics it's enough to find one of these ways that allows a short
element of I to be recovered from a short element of J.

Say we define J as the norm of I from Q(zeta_n) to Q(zeta_n+1/zeta_n).
Then J has a good chance of being principal; assume that it is. Find a
short element of J as above. Is this a short enough element of I? At
first glance it seems to be---a constant factor 2 in the exponent seems
to be absorbed by an O() that's already in the exponent.

Obvious caveats: There are many steps here, and one bad step could stop
the whole attack from working. Verification isn't easy. Many steps
aren't implemented, especially the quantum ones. :-) But will any of the
Ring-LWE proponents now stand up and claim confidence in the hardness of
the cyclotomic-ideal-lattice problems that LPR10 assumed to be hard?
Will they put money on it?

---Dan

Christopher J Peikert

Jul 18, 2016, 12:39:52 PM
to cryptanalytic-algorithms
Once again, a simple misunderstanding of the lattice-crypto literature has led Dan to subject us to a morass of mischaracterization.  (See, e.g., this thread, or this one, or this one.)

In this case, the confusion relates to an issue so basic and fundamental to lattice cryptography, which is dealt with in literally hundreds of papers, that I almost can't believe he's unaware of it -- yet its total omission from his post leads me to believe that's indeed the case.  (The alternative would be even worse: awareness but deliberate deception.)  In any case, the issue is: (Ring-)LWE error rates \alpha and their corresponding worst-case approximation factors.

In short, Dan mischaracterizes Albrecht-Bai-Ducas as claiming "the LPR'10 reduction is vacuous for some number fields."  The reality is that the reduction is (or at least might be) vacuous in some number fields for extremely small, cryptographically irrelevant error rates \alpha = 1/2^{\Theta(\sqrt{n \log n})}.

To my knowledge, such minuscule error rates have never come close to being needed in the history of lattice cryptography.  For comparison, typical (Ring-)LWE cryptosystems use vastly larger, inverse-polynomial error rates, e.g., \alpha = 1/\tilde{O}(n).  Worst-case hardness theorems remain highly non-vacuous in this regime.

Details follow.  We'll start with standard background on error rates and approximation factors, then see how it all relates to Dan's post.  Because the discussion concerns the (alleged) vacuity of worst-case hardness theorems, I will mostly stick to asymptotics.

=== BEGIN BACKGROUND ===

(The following material is all completely standard, and is covered in the original papers along with any modern survey or university course on lattice crypto.)

(Ring-)LWE is a family of average-case problems parameterized by a dimension (or ring degree) n, a modulus q, and an error distribution (often a Gaussian of some kind).  An important quantity is the "error rate" \alpha, which is essentially the "width" of the error distribution divided by q.

Several works prove "worst-case hardness" theorems for (Ring-)LWE, which say (informally): for instantiations satisfying certain criteria, one cannot solve (Ring-)LWE with any noticeable probability, unless one can also approximate certain lattice problems on all instances.  (In some cases, the latter can use quantum computation.)  By contrapositive, if the approximation problem has any hard instances, then almost all instances of the (Ring-)LWE instantiation are hard.

In these hardness theorems, the worst-case approximation factor is the inverse error rate 1/\alpha, times a small (roughly linear) polynomial in n.
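
Schematically, restating the regimes discussed in this thread (this is a summary, not the statement of any particular theorem):

    approx factor = (1/\alpha) * small poly(n), roughly \tilde{O}(n)/\alpha, so:

    \alpha = 1/\tilde{O}(n)                 -->  \tilde{O}(n^2) approx factors               (typical crypto)
    1/\alpha = n^{O(\log n)}                -->  n^{O(\log n)} approx factors                (FHE-style)
    \alpha = 1/2^{\Theta(\sqrt{n \log n})}  -->  2^{\Theta(\sqrt{n \log n})} approx factors  (the regime at issue here)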

Most cryptographic constructions based on (Ring-)LWE -- and certainly all "basic" ones like key exchange or digital signatures -- use a (small) polynomial 1/\alpha, and are therefore based on small polynomial approximation factors.  For example, http://eprint.iacr.org/2014/070 is ultimately based on \tilde{O}(n^{5/2}) approx factors (see Section 4.4).

More "exotic" constructions like FHE sometimes use quasi-polynomial 1/\alpha = n^{O(\log n)}, and are therefore based on quasi-polynomial approx factors.  A few rare theoretical applications may need to rely on 2^{n^\varepsilon} factors for an arbitrarily small constant \varepsilon > 0, but nothing beyond that.

Why aren't larger factors used?  It's because LLL and its descendants can efficiently approximate the relevant worst-case lattice problems to within 2^n factors, or even a bit better.  So if one were to instantiate (Ring-)LWE with an extremely small error rate like \alpha = 2^{-n}, then the worst-case hardness theorems would be vacuous for that instantiation.  In fact, we can say more: (Ring-)LWE with such small \alpha is indeed easy to solve using LLL etc.  Fortunately, as already noted, cryptographic applications can always rely on much larger error rates, where the best known attacks on the underlying approximation factors require exponential time (or nearly so).

=== END BACKGROUND ===

Now back to Dan's post.  The main problem is the description of Albrecht-Bai-Ducas:

With this background in mind, I happened to be glancing at page 22 of
https://eprint.iacr.org/2016/127.pdf (Albrecht--Bai--Ducas) and was
surprised to see an offhand comment that "the reduction of [LPR10] is
vacuous" for some number fields---"worst-case Ideal-SVP" is "solvable in
quantum polynomial time for approximation factors 2^O~(sqrt(n))".

Dan's highly selective quotation/description

"the reduction of [LPR10] is vacuous" for some number fields

is a major misrepresentation of what ABD actually wrote, which is this (emphasis added, bound made more precise):

[Ideal-SVP is solvable] in quantum polynomial time for approximation factors 2^{O(\sqrt{n \log n})} as proved in [CDPR16,BS16]: the reduction of [LPR10] is vacuous for such parameters.

So no, ABD do not say that the reduction is vacuous for some number fields.  As we have seen, "for such parameters" means for extremely small error rates \alpha = 1/2^{O(\sqrt{n \log n})} (NB: the hidden constant is substantial).  Clearly, ignoring these three words is hugely significant: such error rates are much smaller than anything ever used in lattice crypto, both asymptotically and concretely for relevant choices of n.  In other words, the vacuity applies only to an extreme, not-even-close-to-being-used corner of the parameter space.

(Technical point: the CDPR theorem on Ideal-SVP is stated for principal ideals in any prime-power cyclotomic, not for the maximal totally real subfields that Dan refers to, which are often principal ideal domains.  It should be carefully checked whether the theorem generalizes to the latter case, but it wouldn't be surprising if it did.)

A natural question, of course, is whether the Ideal-SVP approximation factor can be improved to something closer to what crypto actually relies upon.  Section 6 of CDPR'16 shows a fundamental barrier for "log unit"-style algorithms (e.g., CGS, CDPR) that return a short generator of a given principal ideal: even the shortest generator can be a 2^{\tilde{\Omega}(\sqrt{n})} factor longer than the shortest vector, in the worst case.  So at least for this class of algorithms, the 2^{O(\sqrt{n \log n})} approximation factor for Ideal-SVP is the best possible, up to something like a \log^{3/2} n factor in the exponent.

Do the authors not realize how serious a statement this is? Calling the
LPR10 reduction "vacuous"---even if only for _some_ number fields---is 
shocking assault on the entire Ring-LWE religion!

Again, this is not what they say.

My understanding is that Cramer--Ducas--Peikert--Regev computed how
short the result is, and Albrecht--Bai--Ducas noticed that this is short
enough to conclude that "the reduction of [LPR10] is vacuous".

And again.
 
I haven't checked the details here, but this seems plausible.

Perhaps check the details next time.  (Or just ask the authors.)

That's all for ABD, but there are a few other loose remarks crying out for correction.

Of course there are also other important restrictions. The ideal-lattice
problem allows huge approximation factors, although not huge
enough to be reached by BKZ in a reasonable time.

As already noted, the ideal-lattice problems underlying crypto usually involve small polynomial approximation factors -- I don't know of any expert who considers this "huge."  By contrast, BKZ only gives actually-huge approximation factors: 2^{O(n (\log \log n)^2 / \log n)} in polynomial time, or roughly 2^{\tilde{O}(n/k)} in 2^k time.

By highlighting the role of the class number in this attack, ABD
implicitly suggest that cyclotomics are immune. I'm skeptical... 
Say we define J as the norm of I from Q(zeta_n) to Q(zeta_n+1/zeta_n).
Then J has a good chance of being principal; assume that it is. Find a
short element of J as above. Is this a short enough element of I? At
first glance it seems to be---a constant factor 2 in the exponent seems
to be absorbed by an O() that's already in the exponent.

This very algorithm has been considered at various times over the years by at least Craig Gentry, Leo Ducas, me, and probably many others I haven't discussed it with.  There's no writeup (until now, I guess), because a cursory analysis beyond "at first glance" shows that it suffers from unboundedly bad approximation factors in the worst case.

Proof: any ideal H in one of the above two fields has minimum distance roughly N(H)^{1/d} (up to small \text{poly}(d) factors), where d is the field degree (over \mathbb{Q}).  Therefore, even the shortest element of J has length N(I)^{2/n}, so it is at best an N(I)^{1/n}-approximate shortest element of I.  If, say, N(I) = 2^{n^2}, then we only get a 2^n approximation (worse than LLL).  QED.
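
To spell out the arithmetic (writing n for the degree of K = Q(zeta) over Q, so that K+ = Q(zeta + 1/zeta) has degree n/2, and suppressing the poly(d) factors):

    N_{K+/Q}(J) = N_{K+/Q}(N_{K/K+}(I)) = N_{K/Q}(I) = N(I)    (transitivity of the norm),

    \lambda_1(I) is roughly N(I)^{1/n},
    \lambda_1(J) is roughly N(J)^{1/(n/2)} = N(I)^{2/n},

so even the shortest nonzero element of J is about an N(I)^{1/n} factor longer than the shortest nonzero element of I; for N(I) = 2^{n^2} that factor is 2^n.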

There are of course a million other things one might try to avoid the huge class number, but this one doesn't work.
 
Sometimes people believe that "provable security" serves as some sort of
magical boundary stopping attacks. As confirmation of this belief, they
often point to a sqrt(n) limit in Regev's LWE theorem matching a sqrt(n)
limit in the Ding/Arora--Ge attack. But a closer look shows that the
Regev boundary is quite far from the Ding/Arora--Ge boundary; see
Kirchner's message dated 2 Mar 2015 16:04:41 -0800.

Regev's lower bound of 2\sqrt{n} on the Gaussian parameter causes Arora-Ge to take at least 2^n time.  In addition, Arora-Ge runs in subexponential 2^{o(n)} time for error o(\sqrt{n}).  How is this evidence against Regev's bound acting as a "boundary stopping attacks?"

Kirchner points out that Arora-Ge also takes exponential 2^{cn} time for error c' \sqrt{n}, where we may take c' \ll 1.  But the constant c shrinks with c', so one cannot make c' too small without losing the conjectured 2^{n/2}-hardness of the relevant worst-case lattice problems.  Finding the constant where the cutoff occurs would take careful analysis and optimization, but it's likely greater than, say, 1/3.  In the land of \sqrt{n}, a factor of 6 is not "quite far."

But will any of the
Ring-LWE proponents now stand up and claim confidence in the hardness of
the cyclotomic-ideal-lattice problems that LPR10 assumed to be hard?

Yes, I will.  As noted, LPR'10 only needs quantum hardness of small-poly(n)-approximate Ideal-SVP in a desired cyclotomic -- or in any desired number field, for the hardness of search-Ring-LWE.  Nothing in the literature comes within the same solar system of achieving such approximation factors.
 
Will they put money on it?

Yes, I will, if the terms of the wager will recognize a difference between small polynomials and 2^{O(\sqrt{n \log n})}.  Please write me privately if you wish to arrange them.

Sincerely yours in cryptography,
Chris

D. J. Bernstein

Jul 18, 2016, 8:13:21 PM
to cryptanalyti...@googlegroups.com
Here's the complete paragraph in question, including the beginning part
that Chris omitted:

But working with K = Q(zeta_p + bar zeta_p) has a drawback: the class
number h(K) = h_p^+ seems quite small (see [Was97, Table 4 pp. 421]),
and this makes the worst-case Ideal-SVP problem solvable in quantum
polynomial time for approximation factors 2^(O~(sqrt(n))) as proved
in [CDPR16,BS16]: the reduction of [LPR10] is vacuous for such
parameters.

My reading of "such parameters" is "K = Q(zeta_p + bar zeta_p)". Chris's
reading of "such parameters" is "approximation factors 2^(O~(sqrt(n)))",
which Chris asserts are

* "cryptographically irrelevant" and

* "much smaller than anything ever used in lattice crypto, both
asymptotically and concretely for relevant choices of n" (note that
the "concrete" part of this is nonsense---big-O formulas do _not_
state numerical values for any particular choices of n)

" "an extreme, not-even-close-to-being-used corner of the parameter
space".

But then how does Chris explain the authors referring to something
"cryptographically irrelevant" as a "drawback" of working with such
choices of K?

Christopher J Peikert writes:
> In the land of \sqrt{n}, a factor of 6 is not "quite far."

Yes, it is, because it means that specifying the limits more precisely
than sqrt(n) would destroy the allegedly meaningful match between the
Regev limit and the Ding/Arora--Ge limit. As I wrote before:

When there are many theorems with quantitative limits and many
attacks with quantitative limits, and one insists on simplifying all
the limits down to crude "sqrt(n)" summaries, _of course_ there will
be meaningless collisions in these summaries. Meanwhile there are
many examples of the belief being severely contradicted by attacks
against the underlying problem: e.g., Shor's algorithm smashing
various "provably secure" constructions. The poly-nightmare scenario
for Ring-LWE would be another example of this.

Kirchner also pointed out that even the sqrt(n) match breaks down when
q drops far below n. He also pointed out that Regev's LWE>=BDD reduction
_doesn't_ have the sqrt(n) limit; the limit is in a separate BDD>=DGS
reduction, making it even more remote from LWE attacks.

I've never seen anyone explain how this sqrt(n) match is supposed to be
anything more than a very rough numerical coincidence. Surely everyone
agrees that _meaningless_ matches should happen at this level of detail,
so why is this particular match supposed to be meaningful?

> Will they put money on it?
> Yes, I will

Great. I'm understanding you to be allowing "any desired number field"
as a target---or do you insist on the number field being Galois? Will
you admit that a <2^80 attack with approximation ratio <=1000 against a
field of degree >=1024 constitutes a break, or do you insist on having a
subexponential-time asymptotic for polynomial approximation ratios for
an infinite sequence of fields?

---Dan

Leo Ducas

Jul 19, 2016, 3:33:58 AM
to Cryptanalytic algorithms, d...@cr.yp.to
Hi Dan, Hi Chris,

if I may comment on my own paper:


My reading of "such parameters" is "K = Q(zeta_p + bar zeta_p)". Chris's
reading of "such parameters" is "approximation factors 2^(O~(sqrt(n)))",

I'm assuming that Chris already understood that ``such parameters'' meant both the number field and the approximation factor: the sentence mentions both.
 
which Chris asserts are

   * "cryptographically irrelevant" and
 

   * "much smaller than anything ever used in lattice crypto, [...]

   " "an extreme, not-even-close-to-being-used corner of the parameter
     space".

But then how does Chris explain the authors referring to something
"cryptographically irrelevant" as a "drawback" of working with such
choices of K?

It is cryptographically irrelevant in my view too, with the exception of tentative multilinear-map/iO constructions (is iO cryptographically relevant? maybe it will be in 50 years?).

My best guess is that it is not very safe to play with such approximation factors whatever the ring, but I only have arguments for those precise parameters so far.

In the case of other rings, I am afraid that a non-uniform quantum attack would lead to the same results as for cyclotomics. But we already had that argument in the NTRU Prime thread:
- Dan dismissed non-uniform attacks, making analogies between algebraic number theory and cryptanalysis of hash functions
- Chris pointed out that X^p - X - 1 is far from being generic
- Dan authoritatively dismissed Chris
 
As I intend to continue my work on cryptanalysis of lattices, both in the generic and the algebraic settings, it would be inappropriate for me to claim that algebraic attacks will not exist below approximation factors 2^(O~(sqrt(n))). What I can say is that there is a serious technical barrier at this value in the cyclotomic case.


Great. I'm understanding you to be allowing "any desired number field"
as a target---or do you insist on the number field being Galois? Will
you admit that a <2^80 attack with approximation ratio <=1000 against a
field of degree >=1024 constitutes a break,

This wouldn't really qualify as a surprise: a root Hermite factor (rhf) of 1.007 is typically costed at 2^80 per SVP call, and 1.007^1024 is about 1265.2. Already, the best known attack is at 2^90.
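
For concreteness, here is a back-of-the-envelope check of that arithmetic, using the common heuristic that reduction achieving root Hermite factor delta reaches approximation factors around delta^d in dimension d; the 2^80-per-SVP-call costing is the one quoted above.

delta = 1.007    # rhf commonly costed at about 2^80 per SVP call (as above)
d = 1024         # field degree in the proposed wager
print(delta ** d)    # about 1265 > 1000, so ratio <= 1000 at that cost is no surprise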

Last, a personal digression:

I tend to view NTRU as a ``homogeneous Ring-LWE'' (a very historically inaccurate formulation, but technically helpful). Independently of worst-case hardness proofs, I see there a hierarchy of assumptions (what follows is not a formal argument):

Ideal-SVP < NTRU < Ring-LWE < Module-LWE < LWE

This ordering is driven by the rank of the homogeneous problem over the base ring. The Ideal-SVP problem has rank 1, NTRU rank 2, Ring-LWE rank 3, Module-LWE a small rank d < n, and LWE full rank d = n.

One can interpret the above hierarchy in the light of commutativity. Indeed, the NTRU problem can, for example, be seen as a Principal Ideal Problem in the ring of 2x2 matrices over some ring R. So it seems to me that moving to the right of the above hierarchy is the best way to eliminate the most worrisome algebraic structure: commutativity.

The log-unit attack of CGS and its proofs in BS and CDPR exploit advanced algebraic features of the ring, but they are limited to (principal-)Ideal-SVP (rank 1).
Exploiting such algebraic features when the rank is greater than 1 is what we tried to present as the open problem in the ABD paper.
Since for some parameters (large approx factors, cyclotomics) there is currently a gap between the best known attack on Ring-LWE and the best known attack on Ideal-SVP, the reductionist arguments got a bit awkward. In my current understanding, the hardness
of NTRU/Ring-LWE at least partially stems from the fact that it has rank > 1 over the ring.

The ABD paper relies on the homogeneous equation to be solved being of rank 2 over the ring, and is relatively less powerful than the previous attack (subfields are needed so that anything can be done). It fails at rank 3 as far as we have tried.

There is a clear trend... My personal conclusion is that if one is afraid of algebraic structure inside Ring-LWE, well, switching the ring is one idea, but it is also obvious that one should move in the direction of LWE, not in the opposite direction: Module-LWE is a much safer choice than NTRU.

I believe, as we wrote, that it is not so absurd to have fallback options (pure LWE, Module-LWE, non-cyclotomic rings, ...), in case we start to see cyclotomic Ring-LWE shaking a bit. While we have recently achieved new results on some algebraic lattices, I still see several serious obstacles to devastating attacks on cyclotomic Ring-LWE.

There are of course many fascinating open questions, with serious implications if they are not answered properly in the years to come. It seems to me that sitting back in one's chair making bets on what is going to be discovered, or scaremongering online by truncating quotes, is an irresponsible attitude, and that the time would be more wisely spent actually doing the research, or teaching the material necessary for this research to be done.

-- Leo

Christopher J Peikert

Jul 19, 2016, 3:39:58 AM
to cryptanalytic-algorithms
Dan, when you find yourself in a hole, it's best to stop digging.

As far as I can tell, Dan has not disputed any of the following central points from my post:

* (Ring-)LWE error rates and their underlying worst-case approximation factors are directly linked.  Smaller error rates correspond to larger approx factors, for which the relevant lattice problem can only become easier.

* Basic (Ring-)LWE-based cryptosystems rely on small polynomial approximation factors, for which the worst-case hardness theorems are far from vacuous, because the fastest known (quantum) algorithms take time exponential in the dimension.

* Those approximation factors (error rates) that do make the worst-case hardness theorems vacuous are extremely large (small), and have never needed to be used in cryptosystems.

* There is a 2^\tilde{\Omega}(\sqrt{n}) approximation barrier for any Ideal-SVP algorithm that returns a generator of a given principal ideal (in any prime-power cyclotomic).

* The following attempt suffers from unboundedly bad approx factors: "Define J as the norm of I... Find a short element of J as above. Is this a short enough element of I? At first glance it seems to be..."

Regarding the interpretation of ABD, there are just a few loose ends:

   But working with K = Q(zeta_p + bar zeta_p) has a drawback: the class
   number h(K) = h_p^+ seems quite small (see [Was97, Table 4 pp. 421]),
   and this makes the worst-case Ideal-SVP problem solvable in quantum
   polynomial time for approximation factors 2^(O~(sqrt(n))) as proved
   in [CDPR16,BS16]: the reduction of [LPR10] is vacuous for such
   parameters.

My reading of "such parameters" is "K = Q(zeta_p + bar zeta_p)". Chris's
reading of "such parameters" is "approximation factors 2^(O~(sqrt(n)))",

No, my reading of "such parameters" includes both the choice of field and the approximation factors, as my original post makes abundantly clear.

If your reading omits the approximation factors, then well, it's just wrong.  Approx factors and error rates are absolutely essential to (Ring-)LWE's worst-case hardness theorems.

(I now see that ABD's D has concurrently voiced his support for my interpretation of "such parameters.")
 
which Chris asserts are

   * "cryptographically irrelevant" and

Indeed, because they are.
 
   * "much smaller than anything ever used in lattice crypto, both
     asymptotically and concretely for relevant choices of n" (note that
     the "concrete" part of this is nonsense---big-O formulas do _not_
     state numerical values for any particular choices of n)

Not nonsense: one can extract from the proof in [Section 6, CDPR'16] a concrete hidden constant in the 2^{\Theta(\sqrt{n \log n})} approx factor: it's at least 2 (and probably a fair bit larger, but I'll just use 2 here).  Similarly, one can track down the hidden logs in the small polynomial approx factors underlying crypto constructions.

For relevant choices of n, the ratio of the subexponential factor to the polynomial one is well above 2^{80}.  So yes, the minuscule error rates are indeed concretely much smaller than what's used in lattice crypto.
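
As a rough, purely illustrative sanity check of that ratio: the script below assumes the hidden constant 2 mentioned above, a natural logarithm inside the square root, and the \tilde{O}(n^{5/2}) factor cited earlier in the thread as the polynomial side; none of these choices are taken from the papers themselves.

import math

# Illustrative gap, in bits, between a 2^{2*sqrt(n*ln n)} approximation factor
# and an n^{5/2} one, for a few dimensions n.
for n in (512, 1024, 2048):
    subexp_bits = 2 * math.sqrt(n * math.log(n))   # log2 of 2^{2*sqrt(n*ln n)}
    poly_bits = 2.5 * math.log2(n)                 # log2 of n^{5/2}
    print(n, round(subexp_bits - poly_bits), "bits")   # well above 80 in each case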
 
   " "an extreme, not-even-close-to-being-used corner of the parameter
     space".

But then how does Chris explain the authors referring to something
"cryptographically irrelevant" as a "drawback" of working with such
choices of K?

Good grief, is this really what's left of your argument? It's not my job to explain ABD's choice of wording.

>     Will they put money on it?
> Yes, I will

Great. I'm understanding you to be allowing "any desired number field"
as a target---or do you insist on the number field being Galois? Will
you admit that a <2^80 attack with approximation ratio <=1000 against a
field of degree >=1024 constitutes a break, or do you insist on having a
subexponential-time asymptotic for polynomial approximation ratios for
an infinite sequence of fields?

As I wrote, we can discuss terms privately.  (I'm happy to announce them publicly if we find mutually agreeable ones.)  But recall that your original challenge referred to "the cyclotomic-ideal-lattice problems that LPR10 assumed to be hard;" those conjectures are asymptotic and involve small polynomial approximation factors.
 
The remainder of Dan's post concerns the separate issue (re-raised in a background paragraph of Dan's original message) of the Gaussian error parameter required by worst-case hardness theorems.  I'll reply to that separately.

D. J. Bernstein

Jul 23, 2016, 6:22:41 PM
to cryptanalyti...@googlegroups.com
Are you a new researcher working on cryptanalytic algorithms---for
example, algorithms to attack real-world lattice-based cryptosystems?

You should be aware that lattice-based cryptography has surrounded
itself with various security myths. If you fall prey to these myths then
you'll miss most of the exciting possibilities for attacks against these
cryptosystems.

The most important myth is that there's a theorem guaranteeing that

* you cannot violate the security targets for serious lattice-based
proposals such as New Hope

* unless you can advance the state of the art in attacking a problem
of finding short nonzero vectors in worst-case ideals.

If you believe this myth then your only hope for breaking these systems
is to focus on this short-nonzero-vector-in-an-ideal problem. There is
then a further myth saying that

* short-nonzero-vector algorithms for worst-case ideals have been
studied by many previous researchers,

* with essentially no improvements compared to short-nonzero-vector
algorithms for worst-case general lattices (just minor stuff such
as working with less data and working modulo symmetries),

so---unless you think you're smarter than all those other people---your
only hope is to focus on short-nonzero-vector problems for worst-case
general lattices. There's then myth #3 saying that

* short-nonzero-vector algorithms for worst-case general lattices
have been studied by many previous researchers,

* with a stable security story (maybe some improvements but nothing
really worth reporting),

so---again, unless you think you're smarter than all those other
people---you can't hope for any serious breakthroughs.

I think that all of these myths can be adequately explained by
sloppiness. I don't think the people pushing these myths are trying to
_maliciously_ mislead users into deploying risky new systems that have
barely been analyzed; I think they simply don't understand the risks. I
also don't think that these people are trying to maliciously (and
counterproductively) deter security analysis---but if you believe the
myths then that's exactly what will happen.

To be an effective attacker you need to be skeptical about security.
Have all the potential weaknesses been explored? How thoroughly have
they been explored? If someone claims that a problem has been studied by
many previous researchers with limited progress, are they giving you a
pointer to a survey on the topic? What attack avenues did the previous
papers actually try exploring, and what did they omit? If someone claims
a security theorem, then what exactly was done to minimize the risks of

* errors in the proofs, and
* hardness hypotheses being flat-out wrong, and
* security conclusions failing to cover attacks that users care about?

If someone claims that a theorem includes a cryptosystem as a special
case, then where are all the details of this specialization spelled out?

Regarding myth #3, the allegedly thorough study of SVP etc.: Algorithms
for the classic lattice problems are actually studied by a rather small
community, so it's unsurprising to see big new advances in attacks. One
can see this even through the oversimplifying lens of asymptotic
algorithm analysis:

* algorithms known today to find the _shortest_ nonzero vector in a
general d-dimensional lattice have asymptotic cost exponent around
0.29d (in what seems to be the worst case, assuming conjectures
that are backed by moderately convincing experiments), whereas

* just a few years ago nothing was known better than 0.41d.

This is a huge change in asymptotic security levels. This also affects
the asymptotic cost of more general algorithms to find _short_ nonzero
vectors. There's much more to say about unexplored avenues of attacks,
especially in the non-asymptotic real-world situation. Don't be deterred
by the myth that SVP has been thoroughly explored!
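
As a crude illustration of the size of that shift, here is the arithmetic with
only the quoted exponents 0.29 and 0.41 (real attack costs involve many more
factors):

# Crude comparison of the quoted asymptotic SVP cost exponents,
# 2^{0.41 d} versus 2^{0.29 d}, for a few lattice dimensions d.
for d in (256, 512, 1024):
    print(f"d={d}: 2^{0.41 * d:.0f} versus 2^{0.29 * d:.0f}")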

Regarding myth #2, the allegedly thorough study of Ideal-SVP etc.: Until
a few years ago it _sounded_ plausible to guess that the classic lattice
problems were almost as hard for ideals as for general lattices---but it
was never true that there were many people studying this. Recent work on
finding short generators in the cyclotomic case---in particular,

* quantum poly-time key recovery for the first FHE proposal (what
Gentry presented in "Fully homomorphic encryption using ideal
lattices" at STOC 2009, not known to be breakable otherwise) and

* quantum poly-time key recovery for the first multilinear-map
proposal (Garg--Gentry--Halevi, subsequently shown to have other
weaknesses beyond key recovery)

---is serious progress in cryptanalysis of ideal-lattice-based
cryptography _and_ in the problem of finding short elements of ideals.
There's again a lot to say about unexplored avenues of attacks.

For the rest of this message I'll focus on myth #1. Apparently there
_is_ a theorem (perhaps with a correct proof---I haven't checked) saying
something like

* you cannot find a <2^100 attack with more than a one-in-a-million
success chance against Giant New Hope

* unless you can advance the state of the art in attacking a problem
of finding short nonzero vectors in worst-case ideals.

But do you see the "Giant New Hope" part there? This is _not_ the system
advertised in the New Hope paper. It's a system that looks very similar
but uses MUCH MORE BANDWIDTH, because it replaces the modulus x^1024+1
with something vastly larger. Nobody has actually written down the
details of Giant New Hope, since it's hard to imagine that anyone would
be interested in a lattice-based system that uses so much bandwidth.

Myth #1 is that the literature contains this theorem with New Hope in
place of Giant New Hope. But equating New Hope with Giant New Hope is
like equating RSA-1024 with (wild guess here) RSA-1048576. Why should
the smaller system be as secure as the larger system? This certainly has
not been proven, and even an extreme attack (such as Shor breaking RSA
in essentially quadratic time) wouldn't make it true.

Myth #1 isn't like myth #2 and myth #3, overstatements of the amount of
cryptanalytic attention. Myth #1 is an outright misstatement of what has
been proven. To see that this difficulty exists, all you have to do is
read what the worst-case-to-average-case reduction theorems actually
say, and then try applying the theorems to specific cryptosystems such
as New Hope. You'll immediately bump into obstacles such as "o()" and
"O()" and "O~()", which _by definition_ are purely asymptotic---plugging
in any specific size is nonsense.

If you challenge the people pushing myth #1 to show you a non-vacuous
worst-case-to-average-case theorem for a specific cryptosystem, they'll
say things like "one can extract constants from the proof" and "one can
track down the hidden logs". Well, yes, it's plausible that this _can_
be done, but why didn't they do it? Why aren't they stating the results
as a theorem with all of the constants shown, so that skeptics can

* see whether the proof actually justifies the claimed constants, and
* see what those constants mean for the alleged application, and
* see how known attacks compare to those constants?

The people pushing the myth will claim that the issue here is "small"
but will constantly dodge _quantifying_ this. They won't even tell you
which specific lattice dimensions they're talking about. There's a basic
failure here to comply with the scientific principle of falsifiability.

There are also "tightness" failures in several other areas of "provable
security". Koblitz and Menezes have written surveys of this issue---

https://eprint.iacr.org/2006/229.pdf
http://cacr.uwaterloo.ca/~ajmeneze/anotherlook/papers/tightness.pdf
https://www.youtube.com/watch?v=l56ORg5xXkk

---and some of these examples turned out to be the nightmare scenario,
where the tightness gaps turned out to be implied by serious attacks.
There are also many examples where tightness gaps don't seem to relate
to actual attacks, but there's certainly a risk, and as an attacker you
shouldn't be fooled by people claiming that there's no risk.

Chatterjee, Koblitz, Menezes, and Sarkar recently did quite a bit of
work to dig through Regev's worst-case-to-average-case reduction for LWE
and formulate a quantitative version of it:

http://cacr.uwaterloo.ca/~ajmeneze/anotherlook/papers/WCACanalysis.pdf

In Section 6 of

https://eprint.iacr.org/2016/360.pdf

they worked through what this means for dimension-1024 LWE (under quite
charitable assumptions), and concluded that the theorem "provides no
assurances whatsoever" regarding the security of this problem. As
mentioned in the NTRU Prime paper (independent, less detailed, work),
there also doesn't seem to be any worst-case-to-average-case reduction
saying anything about dimension-1024 Ring-LWE.

It's reasonably clear that pumping up the dimension enough---producing
Giant New Hope---would restore a worst-case-to-average-case reduction
despite these quantitative difficulties. It's also easy to imagine that
putting more work into reducing the looseness of the proof would allow a
somewhat smaller dimension. But the people who claim that these
reductions are valuable also don't seem interested in doing this work.

The reality today is that there's a complete separation between

* serious proposals for lattice-based cryptosystems (e.g., New Hope)
and

* cryptosystems with worst-case-to-average-case reductions that
aren't known to be vacuous (e.g., Giant New Hope).

Myth #1 refuses to acknowledge this. The pushers of the myth keep trying
to fool you into believing that there are worst-case-to-average-case
reductions for the serious proposals---as if the differences couldn't
possibly matter. Of course, whenever a lattice-based cryptosystem is
broken, the blatant weaseling begins:

* If the system wasn't in the reduction corner, then the pushers will
emphasize some dividing line between the reduction corner and the
broken system, and they'll say how foolish it is to even _consider_
a system crossing this line. Meanwhile they'll continue to push the
myth for _other_ systems.

* If the system was in the reduction corner---i.e., the reduction is
suddenly shown to be vacuous---then the pushers will draw a new
line around the remaining systems, and say that of course _their_
favorite systems are safely on the other side of the line. They'll
try not to comment on their utter failure to predict the attack.

An attacker seeing boundaries shift in this way is a shark tasting blood
in the water.

The bottom line is that nobody has proven a worst-case-to-average-case
reduction that means anything for New Hope, or for any other serious
lattice-based proposals. Even in the unlikely event that there are no
further improvements in breaking the worst-case short-nonzero-vector
problems, it's entirely possible that these cryptosystems will be
broken. Don't let exaggerated "provable security" claims deter you from
looking for attacks!

---Dan

Christopher J Peikert

Jul 25, 2016, 2:28:32 PM
to cryptanalytic-algorithms
Programming note: because there is much research, teaching, and travel to be done in the near future, this will be my last post for a while.  Absence of reply does not indicate endorsement of views.
 
Are you a new researcher working on cryptanalytic algorithms---for
example, algorithms to attack real-world lattice-based cryptosystems?

If you're new to this area, you should be aware that Dan Bernstein has a long history of making serious false assertions and deep mischaracterizations about the lattice cryptography literature, along with poorly thought-through technical claims that fall apart under inspection.  When corrected on the facts, he makes no acknowledgement, nor does he retract his faulty descriptions and claims.  See, for example, here, here, here, and here.

Let's see if this time is any different.  (Spoiler: not so much.)

Dan describes three alleged myths that "lattice-based cryptography has surrounded itself with."  Notice that he does not provide a single citation or quote to substantiate any of these assertions.  This is already suspicious: if the myths are so widespread, then there should be abundant examples to cite.

Dan also refers repeatedly to the "pushers" of these myths without providing a single name or example.  Instead, he smears an entire community of researchers for the alleged sins of these nameless "pushers."

Certainly, those who have only a passing acquaintance with lattice crypto might have misconceptions about it.  (This goes for any field of inquiry.)  But when we actually read the literature, as we will do below, we find that Dan's myths are either:

1. straw men: no credible sources advance the alleged myths -- indeed, they say the opposite; or
2. actually true.

Let's begin.


The most important myth is that there's a theorem guaranteeing that

   * you cannot violate the security targets for serious lattice-based
     proposals such as New Hope

   * unless you can advance the state of the art in attacking a problem
     of finding short nonzero vectors in worst-case ideals.

This "Myth #1," which Dan spends most of his post refuting, is a straw man: no credible source advances or even hints at the above claim.  Indeed, the literature quite clearly opposes this alleged myth.

First, let's clarify that by "serious" Dan appears to mean "a concrete instantiation of all parameters, error distributions, etc." (e.g., n=1,024, q=10,000, etc.)  This is an unfortunate choice of wording, because there is a great deal of very important work that deals only with asymptotic parameters and security targets.  For such works that are supported by worst-case hardness theorems, the above statement "you cannot violate... unless..." is unquestionably true.

On the other hand, for almost any proposed concrete instantiation like New Hope, you will not find a credible lattice cryptographer, academic paper, blog or mailing list post, etc. that would ever advance the claim "you cannot violate... unless..."  It simply isn't true, or at least it's not believed to be true (keep reading to find out why).

Let's now see whether the literature pushes this "Myth #1" as Dan claims.  We start with the well-known 2009 survey by Micciancio and Regev in the book Post-Quantum Cryptography (editor: Dan Bernstein).  This paper was seminal in proposing a methodology for setting concrete parameters for the SIS and LWE problems, whose main claim to fame is their asymptotic worst-case hardness theorems.  At the very outset in Section 1.2, the authors write (emphasis added throughout this post):

The importance of the worst-case security guarantee is twofold. First, it assures us that attacks on the cryptographic construction are likely to be effective only for small choices of parameters and not asymptotically. In other words, it assures us that there are no fundamental flaws in the design of our cryptographic construction. In fact, in some cases, the worst-case security guarantee can even guide us in making design decisions. Second, in principle the worst-case security guarantee can help us in choosing concrete parameters for the cryptosystem, although in practice this leads to what seems like overly conservative estimates, and as we shall see later, one often sets the parameters based on the best known attacks.

This is the exact opposite of Myth #1: it says that we use worst-case hardness theorems to aid us in obtaining designs that are "structurally sound" (asymptotically), but then usually set concrete parameters so as to give a desired level of concrete security against known (or speculative) attacks.  It is self-evident that this likely would not preserve any asymptotic worst-case hardness guarantees.

== INTERLUDE ==

The practical utility of the above methodology is illustrated by the many historical examples of lattice cryptosystems and assumptions which were not supported by any worst-case hardness theorems, and which later turned out to be severely weakened or even completely broken, both concretely and asymptotically.  In particular, these systems all have some "Achilles heel" that makes them much easier to break than the (worst-case) problems the designers had hoped to capture the hardness of.  Examples include:

* Merkle-Hellman and other "knapsack" cryptosystems,
* GGH/NTRU digital signatures (both without and with "perturbations"),
* LASH hash function,
* SV/Soliloquy encryption,
* multilinear map candidates,
* overstretched NTRU and its applications.

By contrast, competent instantiations of cryptosystems having worst-case hardness theorems, like SWIFFT, BLISS, NewHope, and others, have so far stood up quite well.  One can reasonably argue about whether these instantiations have 128 or 100 or 80 (or whatever) bits of security, but they have certainly avoided all the attacks that the above systems fell catastrophically to.  I don't think this is a coincidence.

== END INTERLUDE ==

The Micciancio-Regev survey is not the only paper that opposes "Myth #1."  For example, the first two paragraphs of Lindner-Peikert CT-RSA'11 say:

[SIS and LWE] are provably as hard as certain lattice problems in the worst case, and appear to require time exponential in the main security parameter to solve. For practical parameters, however, the concrete hardness of the SIS and LWE problems against algorithmic attacks is still far from a settled issue. This makes it difficult to assess the actual security and efficiency of cryptographic schemes that are based on these problems.

And later, when setting concrete parameters:

We choose ... a Gaussian parameter s \geq 8.  Note that the theoretical worst-case reduction [Reg05] for LWE asks that s \geq 2\sqrt{n}. However, the constant factors are not tight, and here we are concerned with concrete hardness against known attacks.

If these papers are trying to push Myth #1, they're doing a horrible job of it.

We could go through more examples, but the upshot is that every credible paper will distinguish between asymptotic worst-case hardness (if it applies) and concrete security estimates against particular classes of attacks (if such are given).  Of course there are papers that conflate the two categories or otherwise overclaim, just as there are in any area of science.  These don't get much traction.

Since Dan specifically mentioned the New Hope paper, let's take a look to be sure.  A search for the words "theorem," "proof," "worst-case," "ideal" etc. reveals nothing resembling the alleged mythology: the authors write exclusively about concrete parameter choices and security estimates against known or speculative attacks -- there's not a single word offering worst-case hardness theorems as evidence for concrete security.  In fact, the only mention of "worst-case" in the entire paper is in Section 4, which says that they do not use an error distribution supported by worst-case hardness theorems.  (Nor is there any claim of worst-case hardness in this web post or this one from authors of NewHope.)

Conclusion: "Myth #1" is not a myth at all, much less one that "lattice-based cryptography has surrounded itself with."

With all this said, I personally would like to see more practice-oriented work on instantiations that do conform to the hardness theorems, or at least come closer -- especially in the error distributions.  (This is not a criticism of the above works, which are completely clear about using only concrete attacks for their security claims.)  Micciancio-Regev say that this tends to yield conservative-seeming parameters, but the cost might not be so high compared with recent instantiations.  For example, repeating the analysis of Chatterjee, Koblitz, Menezes, and Sarkar for n \approx 3000 instead of n = 1024 appears to yield meaningful hardness bounds.


Onward to "Myth #2:"

There is then a further myth saying that

   * short-nonzero-vector algorithms for worst-case ideals have been
     studied by many previous researchers,

   * with essentially no improvements compared to short-nonzero-vector
     algorithms for worst-case general lattices (just minor stuff such
     as working with less data and working modulo symmetries),

The first part is true (see below), though we certainly need much more study of these problems.  The second point is also true, with the only exception being the recent CDPR'16+BS'16 quantum algorithm for obtaining cryptographically irrelevant 2^{O(\sqrt{n \log n})} approximation factors (with significant hidden constant)  in certain rings.  It's certainly very important to investigate whether this can be pushed further into the regime of cryptographic relevance (like small \text{poly}(n) factors); note that CDPR'16 demonstrates some fundamental barriers that would need to be overcome.

Regarding previous study, these short-vector problems have naturally arisen prior to their use in cryptography (see, e.g., Henri Cohen's books), and most experts in lattice algorithms have looked seriously at them.  I've also asked experts in computational number theory for their opinions; for what it's worth, none expressed any optimism about better algorithms.

Dan's usual response to all this is "where are the papers?," to which the reply is "so far, everyone has come up empty-handed."  It is an unfortunate reality that totally failed attempts almost never get formally written or published, although ideas do get shared informally.  This is a systemic issue across all of science.  (I have some ideas about how the community could do better in this regard.)

But consider the following plausibility: the hardness of \text{poly}(n)-SVP is essentially the same for ideal lattices (in a relevant class of rings) as for general lattices of the same dimension n.  In such a universe, the demand for attack papers is impossible to satisfy, because there's no way to exploit the special ideal structure.  I am not asserting that we live in such a universe; I'm only saying that the absence of papers does not necessarily indicate lack of attention or attempts -- it might just indicate that the problems are actually very hard.

The literature is consistent with all of what I've just written.  Here is the entire subsection labelled "Security" from LPR'10:

Given the utility, flexibility, and efficiency of the ring-LWE problem, a natural question is: how plausible is the underlying assumption? All of the algebraic and algorithmic tools (including quantum computation) that we employ in our hardness reductions can also be brought to bear against SVP and other problems on ideal lattices. Yet despite considerable effort, no significant progress in attacking these problems has been made. The best known algorithms for ideal lattices perform essentially no better than their generic counterparts, both in theory and in practice. In particular, the asymptotically fastest known algorithms for obtaining an approximation to SVP on ideal lattices to within polynomial factors require time 2^{\Omega(n)}, just as in the case of general lattices [AKS01, MV10].

We also gain some confidence in the hardness of ideal lattices from the fact that they arise naturally in algebraic number theory, a deep and well-studied branch of mathematics that has been investigated reasonably thoroughly from a computational point of view (see, e.g., [Coh93]). Due to their recent application in the design of cryptographic schemes, however, it is probably still too early to say anything about their security with great confidence. Further study is certainly a very important research direction.

And here's a snippet from my Feb 2015 essay on Soliloquy and the importance of worst-case reductions:

Of course, having a reduction does not say anything about whether the underlying problem is actually hard! Just as it's possible that RSA is actually easy while factoring remains hard, it might be the case that, say, Ideal-SVP is easy (in some or all rings) while SIVP remains hard in general. Fortunately, many experts have attempted to solve the Ideal-SVP/BDD problem, including in highly “structured” rings like cyclotomics, and even using the power of quantum algorithms. Yet nobody has been able to come up with any algorithm for Ideal-SVP/BDD that significantly outperforms those for the more general SIVP/BDD problems. This state of affairs could change overnight if someone just discovers a clever new idea, but the same can be said for all problems used in cryptography—it's an inherently risky business, and we make the best bets we can.



There's then myth #3 saying that

   * short-nonzero-vector algorithms for worst-case general lattices
     have been studied by many previous researchers,

   * with a stable security story (maybe some improvements but nothing
     really worth reporting),

This is another straw man: a brief look at eprint, Crypto/Eurocrypt, STOC/FOCS/SODA and related venues over the past 16 or so years would disabuse anyone of part 2 of this "myth."  Indeed, the same community Dan accuses of fomenting the myth is responsible for a lot of the progress!

Part 1 of this "myth" is actually true.  Here is a sampling of the many prominent researchers who have made multiple deep contributions on these problems over several years, starting from the early 1980s (I'm surely missing some):

H. W. Lenstra, A. Lenstra, L. Lovasz, C.-P. Schnorr, R. Kannan, M. Ajtai, P. Nguyen, D. Stehle, D. Micciancio, O. Regev.

Dozens of others have made shorter-term but highly significant contributions.

(It might be interesting to compare this with the research communities and cryptanalytic activities surrounding, say, factoring or elliptic-curve problems.  I don't know enough about their histories to hazard a guess.)


So much for the three "myths."  Loose ends:
 
 * quantum poly-time key recovery for the first FHE proposal (what
     Gentry presented in "Fully homomorphic encryption using ideal
     lattices" at STOC 2009, not known to be breakable otherwise) and

This refers to an ad-hoc, not-worst-case-hard example of Gentry's general framework, which he repeatedly emphasizes does not require principal ideals with "unusually short" generators (which are essential to the key-recovery attack).  His Crypto'10 paper and thesis describe a version with worst-case hardness, which remains unaffected by the referenced attack.

 * quantum poly-time key recovery for the first multilinear-map
     proposal (Garg--Gentry--Halevi, subsequently shown to have other
     weaknesses beyond key recovery)

This construction is also unsupported by a worst-case hardness theorem (along with all multilinear map proposals to date).  The other serious, actually practical attacks on these systems all exploit various "Achilles heels," and don't even involve finding short vectors in lattices.

Since neither of the referenced cryptosystems is supported by worst-case hardness proofs, it is strange to list them as the only concrete examples in a post that's all about the "myths" surrounding such proofs.

   * If the system was in the reduction corner---i.e., the reduction is
     suddenly shown to be vacuous

To my knowledge, "the reduction is suddenly shown to be vacuous" has never happened.  Can anyone offer an example?  Specifically: a cryptosystem whose correct security reduction needed a worst-case lattice assumption (NB: these are always asymptotic) that was later falsified.


If you challenge the people pushing myth #1

Again, "myth #1" is a straw man, and nobody is pushing it.
 
to show you a non-vacuous
worst-case-to-average-case theorem for a specific cryptosystem, they'll
say things like "one can extract constants from the proof" and "one can
track down the hidden logs".

No, they will say this -- rather, I did say this -- to refute the false claim that the 2^{O(\sqrt{n \log n})} and small-poly approximation factors cannot be compared for relevant concrete n.  See what follows for the gory details.
 
Well, yes, it's plausible that this _can_
be done, but why didn't they do it?

We did do it: in LPR'10, all logs and constants in the approximation factors are precisely accounted for.  See Theorem 4.1, Theorem 5.1 and the subsequent sentence, and Theorem 5.2.  (One wonders whether Dan ever reads the papers he comments on...)

In Section 6.2 of CDPR'16 we didn't give an explicit constant for the 2^{O(\sqrt{n \log n})} approx factor, because doing so would significantly complicate the already technical proof, and the best constant we could hope to obtain wouldn't be attractive.  The asymptotics were the main novelty.

But, for the record, let's now work out some constants.  Fix m=2048 (corresponding to n=1024), use the exact values of \| z_j \| = 20.398\ldots, and -- most importantly -- be very generous to the algorithm by ignoring the lower-order terms (which only hurt the approx factor), and using best-case subgaussian norms and optimistic probabilities (e.g., 1/n instead of 1/n^2).

Even with all these huge cheats, I can't get the approximation factor below 2^{54}.  For worst-case inputs -- which are what really matter here -- and realistic probabilities, the exponent would be above 100.  The concrete worst-case approx factor underlying, e.g., the P'14 key exchange is smaller than 2^{34}.

Conclusion: no vacuity, even assuming huge speculative improvements in the CDPR'16 algorithm.  Whether such improvements (or more) can be obtained remains an important question.  And breaking the asymptotic 2^{\tilde{\Omega}(\sqrt{n})} barrier would require a major new idea beyond the current paradigm of "find the shortest generator."

The reality today is that there's a complete separation between

   * serious proposals for lattice-based cryptosystems (e.g., New Hope)
     and

   * cryptosystems with worst-case-to-average-case reductions that
     aren't known to be vacuous (e.g., Giant New Hope).

On this, at least, we can agree -- if, as above, we replace the subjective term "serious" by "instantiated with concrete parameters."  Establishing reliable concrete security estimates for concrete parameters is a thorny problem.  This is widely known and acknowledged, not mythology.

D. J. Bernstein

unread,
Aug 1, 2016, 2:30:53 AM8/1/16
to cryptanalytic-algorithms
Let's see whether, by focusing carefully on one part of the current
lattice religion, we can establish for the record that there's a
completely clear gap between

* what's scientifically justified and
* what this part of the religion says.

Once this is settled, we can then try moving on to another part and
repeating the same process.

As a starting point, I'd like to highlight a particularly obnoxious
religious crusade. The goal of the crusade is to suppress an interesting
line of attack research, specifically the attack idea introduced and
further developed in the following papers:

(1) https://eprint.iacr.org/2014/784.pdf "Weak instances of PLWE"
(Eisenträger--Hallgren--Lauter, SAC 2014)

(2) https://eprint.iacr.org/2015/106.pdf "Provably weak instances of
Ring-LWE" (Elias--Lauter--Ozman--Stange, Crypto 2015)

(3) https://eprint.iacr.org/2015/971.pdf "Attacks on Search RLWE"
(Chen--Lauter--Stange)

(4) https://eprint.iacr.org/2016/193.pdf "Vulnerable Galois RLWE
families and improved attacks" (Chen--Lauter--Stange)

The attack idea exploits q-dependent ring homomorphisms created by
low-degree factors of the polynomial f mod q. Paper #1 targeted an
extremely narrow set of cases, with many limitations, but paper #2,
paper #3, and paper #4 have been pushing the central attack idea
further, removing various combinations of limitations.
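
To make the underlying mechanism concrete, here is a toy Python sketch
(my own illustration with made-up toy parameters, not code from any of
the papers above): a root r of f mod q gives the ring homomorphism
"evaluate at r" from (Z/q)[x]/f to Z/q, and that homomorphism turns a
Ring-LWE sample (a, b = a*s + e) into the single noisy scalar equation
b(r) = a(r)*s(r) + e(r) mod q.

    import random

    q = 257
    f = [1, 0, 0, 0, 1]          # toy ring: f = x^4 + 1, which splits mod 257
    n = len(f) - 1

    def ringmul(a, b):
        # multiply a, b in (Z/q)[x]/f, for monic f
        res = [0] * (2 * n - 1)
        for i in range(n):
            for j in range(n):
                res[i + j] = (res[i + j] + a[i] * b[j]) % q
        for d in range(len(res) - 1, n - 1, -1):   # reduce modulo f
            c = res[d]
            for i in range(n + 1):
                res[d - n + i] = (res[d - n + i] - c * f[i]) % q
        return res[:n]

    def ev(a, r):
        return sum(c * pow(r, i, q) for i, c in enumerate(a)) % q

    r = next(r for r in range(q) if ev(f, r) == 0)       # a root of f mod q

    s = [random.randrange(q) for _ in range(n)]          # secret
    a = [random.randrange(q) for _ in range(n)]          # public ring element
    e = [random.randrange(-2, 3) for _ in range(n)]      # small error
    b = [(x + y) % q for x, y in zip(ringmul(a, s), e)]  # b = a*s + e

    lhs = ev(b, r)
    rhs = (ev(a, r) * ev(s, r) + ev(e, r)) % q
    assert lhs == rhs                                    # the homomorphism in action

Of course, whether the reduced error e(r) is small or non-uniform enough
to be exploitable depends entirely on the error distribution; that
dependence is exactly what the papers above (and the responses to them)
argue about.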

I don't mean to overstate the breadth of the attacks so far---none of
them are things you'd run into by accident---but I doubt that this is
the end of the story of how an attacker can exploit these extra ring
homomorphisms. When I say "extra" I'm comparing to the case that gives
the fewest homomorphisms to the attacker, namely irreducible f mod q.
NTRU Prime, despite mainly being known for choosing f with a large
Galois group, also chooses an "inert" q, i.e., irreducible f mod q. Of
course, taking homomorphisms away from the attacker isn't a guarantee of
security---it's just best-effort risk management, very much like ECRYPT
recommending prime fields for discrete logs a decade ago.
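
To make "inert" concrete: q is inert exactly when f is irreducible mod
q, so that (Z/q)[x]/f is a field and has no ring homomorphisms onto any
smaller nonzero ring. Here is a self-contained Python sketch (my own toy
illustration -- a hypothetical degree-7 polynomial and tiny primes, not
actual NTRU Prime parameters or code) that tests this when deg f is
prime, using the standard criterion: a monic f of prime degree n is
irreducible mod q exactly when x^(q^n) = x mod (f, q) and
gcd(x^q - x, f) = 1 over Z/q.

    def polyreduce(a, f, q):
        # reduce a modulo the monic polynomial f, coefficients taken mod q
        a = [c % q for c in a]
        while len(a) >= len(f):
            if a[-1] == 0:
                a.pop()
                continue
            c, shift = a[-1], len(a) - len(f)
            for i, fi in enumerate(f):
                a[shift + i] = (a[shift + i] - c * fi) % q
            a.pop()
        while a and a[-1] == 0:
            a.pop()
        return a

    def polymulmod(a, b, f, q):
        res = [0] * max(len(a) + len(b) - 1, 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                res[i + j] = (res[i + j] + ai * bj) % q
        return polyreduce(res, f, q)

    def polypowmod(a, e, f, q):
        result, base = [1], polyreduce(a, f, q)
        while e:
            if e & 1:
                result = polymulmod(result, base, f, q)
            base = polymulmod(base, base, f, q)
            e >>= 1
        return result

    def polysub(a, b, q):
        res = [0] * max(len(a), len(b))
        for i, c in enumerate(a):
            res[i] = (res[i] + c) % q
        for i, c in enumerate(b):
            res[i] = (res[i] - c) % q
        while res and res[-1] == 0:
            res.pop()
        return res

    def polygcd(a, b, q):
        while b:
            inv = pow(b[-1], q - 2, q)           # q prime
            b = [(c * inv) % q for c in b]       # make b monic
            a, b = b, polyreduce(a, b, q)
        return a

    def is_inert(q, f):
        n = len(f) - 1                           # degree of f, assumed prime
        x = [0, 1]
        if polypowmod(x, q ** n, f, q) != x:     # need x^(q^n) = x mod (f, q)
            return False
        g = polygcd(polysub(polypowmod(x, q, f, q), x, q), f, q)
        return len(g) == 1                       # no linear factors mod q

    f = [-1, -1, 0, 0, 0, 0, 0, 1]               # toy: f = x^7 - x - 1
    print([q for q in [3, 5, 7, 11, 13, 17, 19, 23] if is_inert(q, f)])

An inert q is the opposite extreme from the fully split case that the
reduction theorems ask for; the point of choosing it is simply to hand
the attacker no homomorphisms onto smaller rings.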

So far this sounds like a common situation in crypto. A new attack idea
is demonstrated in limited cases. Maybe it's the tip of a much larger
iceberg; maybe not. Some designers steer clear; some don't worry. But
why is there a crusade against the papers that explore the attack idea?

The crusaders are starting from the religious notion that factors of f
mod q cannot possibly be a bad thing. There are actually two variants of
this religion:

* Religion version 1: Thou shalt have as many factors of f mod q as
possible, namely deg f linear factors. This allows PROOFS of
security: it guarantees, and cannot possibly damage, security.

* Religion version 2 (newer): The choice of q has been PROVEN to be
irrelevant to Ring-LWE security. To be more precise: Thou shalt
choose f so that there's _some_ split q close to the desired size,
but taking another q of this size is then just fine, and cannot
possibly damage security.

Attacks _exploiting_ factors of f mod q---as the attacks in the above
papers do---are heresy for both religions. That's why the crusaders have
been trying to burn these attack papers.

Beyond simply pushing the general religious tenets, the crusaders have
been pushing a specific myth that _must_ be true if the choice of q
doesn't matter: namely, the myth that everything broken by this line of
research has also been shown to be broken by other attacks that depend
only on the size of q, _not_ on the arithmetic features of q. Why worry
about the choice of q if you believe that the q-specific attacks don't
accomplish anything beyond generic-q attacks?

Let's look more closely at the tiny grains of truth that have been
wildly exaggerated to create this religion:

* Regarding religion version 1: What's true is that some reduction
theorems assume that f splits completely mod q. However, these
theorems have always been compatible with the possibility that such
choices of q are _much less secure_ than other choices of q.

What these theorems say is that a poly-time attack against
Decision-Ring-LWE for split q implies a poly-time attack against
Search-Ring-LWE for split q. There has never been any guarantee of
the _same_ level of security: the two "poly-time" instances can be
quite different. Furthermore, there has never been any guarantee
that Search-Ring-LWE for split q is as secure as Search-Ring-LWE
for other q.

* Regarding religion version 2: What's true (presumably---I haven't
checked the proofs in detail, although the core idea is easy) is
that "modulus switching" provides a poly reduction between
different choices of q. But again there has _never_ been any
guarantee of the _same_ level of security.

All of these theorems have always been entirely compatible with the
possibility that Search-Ring-LWE (and Decision-Ring-LWE and typical
Ring-LWE-based cryptosystems) is _much easier to break for split q_
than for other q---and also with the opposite possibility, or the
neutral possibility.

The bottom line is that the theorems have _never_ scientifically
justified the assertion that choosing split q maximizes security.

* Regarding the attacks: What's true is that _some_ of the cases
broken by this line of attack papers are also broken (at the same
speed or faster) by other attacks that don't depend on q. However,
"some" and "all" are quite different words!

I'm not aware of any dispute regarding the following statements:
For some cases of Ring-LWE, the attacks in these papers---attacks
that rely heavily on the choice of q---are significantly faster
than all other known attacks. Furthermore, this has been the
situation continuously for two years, starting from the publication
of paper #1.

https://eprint.iacr.org/2016/239.pdf "Provably weak instances of
Ring-LWE revisited" (Castryck--Iliashenko--Vercauteren, Eurocrypt
2016) shows that various examples given in paper #2 have error 0 in
certain known output positions, so those cases are very easily
broken by linear algebra. But the authors don't claim that this
attack covers the full scope of #1 and #2, never mind #3 and #4. In
fact, paper #4 explicitly says that its examples "are not directly
attackable using linear algebra, as in the recent paper [CIV]."

https://eprint.iacr.org/2016/351.pdf "How (not) to instantiate
Ring-LWE" (Peikert) gives a "new, unified exposition" of the attack
idea developed in the papers cited above. It's not obvious to me
what's "new" in the attack presentation, aside from superficial
changes of language---e.g., instead of reducing polynomials modulo
a factor of f, this paper uses "reduction modulo an _ideal divisor_
\q of the modulus qR", not mentioning that this is exactly the same
computation. (This is one of quite a few points in the paper where
I found myself thinking that the level of credit was far below what
is scientifically required.)

As far as I can tell, in its central attack---a restatement of the
attacks introduced in the previous papers---this paper uses factors
of f mod q in the same situations where previous papers do, and
uses them in the same ways. Certainly this paper doesn't change the
important point that the best attacks known---for some cases---
depend on exactly these factorizations.

To summarize, the "factors of f mod q can't be bad" religion has a
flimsy scientific justification---theorems that by their own terms are
far too weak to say any such thing. Furthermore, the actual attack
picture for the past two years flatly contradicts the specific myth that
this religion is forced to push regarding EHL-ELOS-CLS.

When I pointed to common myths in a previous message, Peikert accused me
of attacking strawmen. He claimed that it was "suspicious" that I didn't
provide "a single citation or quote". Okay, then, here's an actual quote
from someone pushing religion version 2:

We prove that the arithmetic form of the modulus q is irrelevant to
the computational hardness of LWE and RLWE.

This is quite seriously wrong. Nobody has proven, and nobody has any
credible hope of proving, such a strong theorem. There is a bound on the
security gap, but the bound is definitely not 0.

Here's another example of a quote from someone pushing the specific myth
that the EHL-ELOS-CLS attacks are superseded by attacks independent of
the algebraic structure of q:

The attacks in these works [EHL, ELOS, CLS] were showed to be due to
lack of noise (put differently, LWE with no error is easy).

This quote was followed immediately by citations to CIV and Peikert, and
preceded by a claim that EHL, ELOS, and CLS contained "a significant
amount of algebraic obfuscation." But the quote is amazingly far from
what's scientifically justified:

* The quote is flatly wrong in lumping all of the attack targets into
the CIV "no error" corner.

* Most readers will understand the quote as saying that the attacks
depend _only_ on the noise level, and that's also flatly wrong.

* The quote completely fails to acknowledge the critical algebraic
role of q in the attacks.

What's also interesting is the context: this quote claimed to respond to
NTRU Prime's choice of q being inert. We justified this choice in the
NTRU Prime paper as follows:

Finally, we choose q as an inert prime so that there are no ring
homomorphisms from (\Z/q)[x]/P to any smaller nonzero ring. The
attack strategies of [...] start by applying such homomorphisms; the
attacks are restricted in other ways, but we see no reason to provide
homomorphisms to the attacker in the first place.

A careful reading of Peikert's paper shows that such homomorphisms are
critical for one example after another of the attack that the paper
claims to be an "exposition" of---and yet the paper is being cited as
_opposition_ to considering the algebraic structure of q. How is it
possible for the state of the art to be so massively misrepresented?

My goal here is to clarify the technical issues, not to name culprits
and get bogged down in the question of whether those people are
malicious and/or incompetent. But Peikert seems desperate for me to name
culprits, and I think that it's impossible to avoid wondering about the
Peikert paper mentioned above:

* Why does the paper fail to mention the reduction modulo factors of
f mod q---the idea introduced by EHL, pushed further by ELOS and
CLS, and used in a practically identical way (with minimal credit)
by Peikert---until page 11?

* How will readers who get through only the abstract, or the
introduction, or even the first 10 pages of Peikert's paper, learn
that the factors of f mod q play an essential role at the boundary
between what's known to be broken and what isn't?

* If this "new, unified exposition" is supposed to help people
understand the attacks known today, why does the paper put so
little emphasis on an essential ingredient for those attacks?

Peikert often advocates a holier-than-thou unused fringe corner of
lattice-based cryptography, requiring huge dimensions and huge noise to
be able to prove that attacks with more than (e.g.) a 2^(-20) success
chance in <2^100 operations imply advances in breaking some worst-case
short-vectors-in-ideals problem. So I fully understand that Peikert
wants to emphasize that everything broken in the EHL-ELOS-CLS line of
research so far has significantly smaller noise rates. But how does this
justify an "exposition" deferring a critical algebraic feature of the
attacks to page 11?

Hearing that the attacks (so far) have noise rates below some number is
_vastly_ less helpful for understanding the attack idea than hearing
that the attacks reduce polynomials modulo a factor of f mod q. It's
easy to see how Peikert's "exposition" will deceive people regarding a
quite fundamental aspect of these attacks, especially if those people
are religious acolytes vulnerable to confirmation bias. Peikert should
instead be going out of his way to prominently _disavow_ the "factors of
f mod q can't be bad" religion and distance himself from the crusade.
Pretending that the religion doesn't exist is counterproductive.

Meanwhile Peikert accuses me of "serious false assertions and deep
mischaracterizations about the lattice cryptography literature, along
with poorly thought-through technical claims that fall apart under
inspection," and not correcting these errors, etc. My own view is that
Peikert has established an unfortunate pattern of misrepresenting what
I wrote, criticizing the results, portraying this as criticism of me
personally, and trying to distract people from what I actually wrote.
It's impossible not to notice that two of the messages that Peikert
points to are risk-management analyses of the EHL-ELOS-CLS line of
research. Why does Peikert find this line of research so threatening
that he feels compelled to resort to ad-hominem attacks?

Anyway, my preference---as illustrated in my previous message---is to
simply tackle the content of the myths without getting sidetracked on
the question of who is to blame for the myths. In particular, what
really matters here is not the role of Peikert (or anyone else), but the
fact that small factors of f mod q have been playing a role in the best
attacks known for two years and counting.

---Dan

damien...@gmail.com

unread,
Aug 3, 2016, 3:52:55 AM8/3/16
to Cryptanalytic algorithms, d...@cr.yp.to
Dear Dan,

I'm not very fond of being turned into a religious zealot or even a 
crusader...

*****
Okay, then, here's an actual quote 
from someone pushing religion version 2: 

   We prove that the arithmetic form of the modulus q is irrelevant to 
   the computational hardness of LWE and RLWE. 
*****

To clarify, "someone" is Adeline Langlois and I, as reference [47] from
the NTRU Prime paper. This corresponds to IACR eprint 2012/091
(https://eprint.iacr.org/2012/091).

Please have a careful look at that webpage. It says: 
  Date: received 23 Feb 2012, last revised 29 Jun 2012, withdrawn 5 Nov 2012
and again
  Available format(s): (-- withdrawn --)
and 
there is no pdf that is directly accessible.



Is this the only citation you could find? A citation from an unpublished 
draft that was withdrawn almost 4 years ago? 



Further, if you look at the paper (still accessible by clicking 
"all versions of this report"), you will notice that the paper 
clearly lives in an asymptotic world, with tildeOmega's and the like. 

As indicated on the 2012/091 webpage, that draft was withdrawn because 
superior results were in the works. Eventually, this line of research led
to two follow-up papers (links omitted here). 
The citation you refer to does not exist in either of these two works. 
Note that these are now two refereed and published articles, as opposed 
to a withdrawn draft. 


Sincerely yours in religion,
Damien

 

D. J. Bernstein

unread,
Aug 3, 2016, 5:06:58 AM8/3/16
to Cryptanalytic algorithms
damien...@gmail.com writes:
> Is this the only citation you could find? A citation from an unpublished 
> draft that was withdrawn almost 4 years ago? 

Of course not. I happened to be in the room a year ago when you gave a
talk and made exactly the same false statement. Your slides

http://summerschool-croatia.cs.ru.nl/2015/The%20NTRU%20encryption%20scheme.pdf

are worded differently, composing this mistake with another instance of
the same mistake, first unjustifiably saying

R-LWE is no easier than Poly(n)-Ideal-SVP

with a split-q hypothesis, and then saying "The arithmetic restriction
on q can be removed"---which the reader understands as another "no
easier" statement, also an unjustifiable one. Overview:

* A is as hard as C: unjustified; claimed in slides
* B is as hard as C: unjustified; claimed in slides
* A is as hard as B: unjustified; claimed in talk (and in paper)

As for https://eprint.iacr.org/2012/091, I don't see how a withdrawal
notice saying

Note: An improved article is in the works. Final and stronger results
will be published in a paper titled "Classical hardness of Learning
with Errors" with Zvika Brakerski, Adeline Langlois, Chris Peikert,
Oded Regev and Damien Stehle

is an admission of error. Furthermore, the precise error that I quoted
is from the abstract, which you _didn't_ withdraw. Furthermore, your new
merged paper is full of similar statements promoting the same mistake.

> Further, if you look at the paper (still accessible by clicking 
> "all versions of this report"), you will notice that the paper 
> clearly lives in an asymptotic world, with tildeOmega's and the like. 

The error I'm talking about is not in the theorems. The error is in
incorrectly summarizing the theorems as having proven something much
stronger (and highly questionable), such as "no easier" or "irrelevant
to the computational hardness".

Do you not understand the effect of such wild exaggerations upon typical
readers? Look at the first paragraph of

https://en.wikipedia.org/wiki/Lattice-based_cryptography

and you'll find the following religious garbage, a composition of
several myths:

Further the Learning with Errors variants of lattice-based
cryptography come with security proofs which demonstrate that
breaking the cryptography is equivalent to solving known hard
problems on lattices.

Where do you think people get these ridiculous ideas? Do you not feel
any responsibility to correct widespread errors that start from your own
published exaggerations, and that draw unjustified levels of attention
to systems that you've published? Your work is perfectly respectable
without these exaggerations!

In my previous messages I didn't name you---my goal here is to correct
the errors, not to get sidetracked by questions of blame for the errors.
But I lose sympathy when you pretend that the religion doesn't exist.

---Dan

damien...@gmail.com

unread,
Aug 3, 2016, 6:45:54 AM8/3/16
to Cryptanalytic algorithms, d...@cr.yp.to
Dear Dan,

> But I lose sympathy when you pretend that the religion doesn't exist. 

I haven't lost sympathy. I'm always glad to see newcomers in a field;
I find it exciting.

Also, I don't recall having written anything about the existence of any 
religion.  I am merely commenting on the use of a citation of mine.  
I would prefer steering clear of religious debates.

> In my previous messages I didn't name you

Well... in the article, the naming is pretty clear. And the tone of your
post, using this citation as an example, was quite damning. 

Further, I feel responsibility for what I wrote: I buried that 
draft 4 years ago, and I want to make sure it actually stays buried.  
If you insist on resurrecting it in your article and on using the citation, 
you may point out in some way that this draft was withdrawn by its
authors, so that people use this citation with care.

> Furthermore, the precise error that I quoted  is from the abstract, 
> which you _didn't_ withdraw.

I'm not aware of a way to do that using eprint. When an author withdraws
a draft, it typically means the author wishes the draft to be considered
non-existent. In case it is still unclear: I consider this draft buried.


Now, let's look at the new citations. 

Let me insist on a point I made in my previous post: the contexts of 
the citations matter. By cutting sentences out of context, it is 
possible to make those sentences mean more than intended. 

For the withdrawn article, the two follow-up works and the slides you
refer to, it is clear from the direct context that these claims hold in an 
asymptotic sense.

For example, the quote "R-LWE is no easier than Poly(n)-Ideal-SVP"
is the title of the box on Slide 21. The content of the box uses words
like "non-negligible" and the notation "Poly(n)", which refer to 
asymptotics. 
More precise statements for this claim are given in [LPR'11], which is 
cited on that slide, for members of the audience who were interested
in the claim.

Best regards,
Damien

Leo Ducas

unread,
Aug 3, 2016, 9:46:15 AM8/3/16
to Cryptanalytic algorithms, d...@cr.yp.to
Not sure if I'm part of the Religion in Dan's world, but let me mention this:


       (3) https://eprint.iacr.org/2015/971.pdf "Attacks on Search RLWE"
           (Chen--Lauter--Stange)


I must say I found the attack in the ramified case quite interesting: while many of their previous attacks applied to set-ups a bit ``designed to be broken'', this case looks like a legit Ring-LWE (though not covered by any LPR-type theorem).

I even suggested some improvements on the attack to the authors, replacing the chi-square test by a better statistical test, and offering a quick implementation of it.

The script for the slightly improved ramification attack is attached [no strings attached, no claims on derived results]. Instead of producing an infinite amount of noise on what you believe to be the right or the wrong choice of modulus, Dan, this is your chance to start contributing cryptanalytic facts.

As for the splitting attack in the m-th cyclotomic case, it does indeed allow one to reduce the dimension of the lattice problem, but it trades this against the error rate: overall it seems to make the problem harder (up to vacuously hard when the new dimension is too small).

Let's say m is even, and we want to halve the dimension: you'd take w = r^(m/2) where r is a primitive m-th root of unity mod q, apply the ring homomorphism zeta^(m/2) |-> r^(m/2), and the new error coefficients will be of the form e = e' + w e''. Because w^2 = 1 mod q and w <> +/- 1, we must have w mod q > sqrt(q): the error has grown by a factor of at least sqrt(q). Viewed as a lattice problem, according to the literature [factor delta, page 16, eprint.iacr.org/2010/613.pdf], this is harder to solve than the original problem.

Of course the distribution of e would be a bit special, but as far as lattice algorithms go, I wouldn't know how to exploit that. Of course this is not exhaustive (why m even, why halving), but I've always come to the same conclusion with other parameters.

My point being: contrary to what Dan seems to believe, the hardness of RLWE is not only supported by asymptotic theorems, but has also been seriously considered from the cryptanalytic side.

But maybe we are all just incompetent. In that case, we have no choice but to beg Dan to actually produce cryptanalytic facts.

For example, here is a (still to be reviewed) cryptanalytic fact, from Fouque and Kirchner [http://eprint.iacr.org/2016/717]:

with respect to our current understanding of lattice algorithms (BKZ), independently of the ring or the modulus, NTRU is strictly weaker than Ring-LWE for the same parameters.

In that regard, the best fallback option would be a ``Ring-LWE-Prime'' cryptosystem, or even a ``Module-LWE-Prime'' following my previous reasoning above in this thread.

- Leo

Leo Ducas

unread,
Aug 3, 2016, 9:47:45 AM8/3/16
to Cryptanalytic algorithms, d...@cr.yp.to
And.... here is the promised attachment.
809.sage

Leo Ducas

unread,
Aug 3, 2016, 11:07:10 AM8/3/16
to Cryptanalytic algorithms, d...@cr.yp.to


Let's say m is even, and we want to halve the dimension: you'd take w = r^(m/2) where r is a primitive m-th root of unity mod q, apply the ring homomorphism zeta^(m/2) |-> r^(m/2), and the new error coefficients will be of the form e = e' + w e''. Because w^2 = 1 mod q and w <> +/- 1,

Being a bit annoyed by all this, I reconstructed this old argument a bit too quickly it seems:

of course w = -1 is possible (e.g. when m is a power of 2), but this morphism doesn't decrease the dimension. One should instead send zeta^(m/4) |->  r^(m/4) for anything to happen.
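
For what it's worth, here is a tiny numeric sanity check (my own throwaway Python, separate from the attached 809.sage) of the size fact this argument relies on: any w with w^2 = -1 mod a prime q -- for instance w = r^(m/4) when m is a power of 2 and r is a primitive m-th root of unity mod q -- has both representatives of absolute value at least sqrt(q-1), so multiplying an error coefficient by w blows it up to size roughly sqrt(q). Toy parameters below.

    import math, random

    def primitive_mth_root(m, q):
        # q prime with q = 1 (mod m), m a power of 2: random search for an order-m element
        while True:
            r = pow(random.randrange(2, q), (q - 1) // m, q)
            if pow(r, m // 2, q) != 1:      # order divides m but not m/2, so it is m
                return r

    m, q = 256, 7681                        # toy parameters with q = 1 (mod m), q prime
    r = primitive_mth_root(m, q)
    w = pow(r, m // 4, q)
    assert pow(w, 2, q) == q - 1            # w^2 = -1 (mod q)
    w0 = min(w, q - w)                      # smaller representative of w
    print(w0, math.isqrt(q - 1))
    assert w0 >= math.isqrt(q - 1)          # w0 >= sqrt(q-1), i.e. roughly sqrt(q)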

-- L

Christopher J Peikert

unread,
Aug 3, 2016, 5:26:17 PM8/3/16
to cryptanalytic-algorithms
Dan has gone from supporting his strawmen with no evidence at all (and ignoring abundant contrary evidence, including an influential survey from his own book), to supporting them by quoting a withdrawn, disavowed paper -- without bothering to mention that fact.  Of course, this was so as not to assign "blame" -- how noble.  (As for the second quote, I can't get Google to find the paper from which it was taken.)  With a critic like this, who needs supporters?

In this case, the strawman is "factors of f mod q [equivalently, homomorphisms] can't be bad."  This is a vacuously false statement, because it leaves out a central piece of the picture: the choice of error distribution.

One can easily define pathological error distributions that make Ring-LWE trivially breakable, and only by using homomorphisms.  For example, define the error to always be zero under one homomorphism, and uniformly random under all the rest.  Now the best -- indeed, only -- way to distinguish the Ring-LWE distribution from uniform must use the homomorphism!  Does this mean anything for the (in)security of homomorphisms in general?  Of course not: pathological errors are pathological.
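
Here is a toy Python sketch of exactly this kind of pathological distribution (my own illustration with made-up toy parameters, not code from any paper): work in Z_q[x]/(x^4+1) with q = 257, where x^4+1 has four roots mod q, and take the error to be (x - r_0) times a uniformly random polynomial of degree < 3.  Such an error is zero under the homomorphism "evaluate at r_0" and uniform under the other three, and the natural distinguisher has to use that homomorphism: under Ring-LWE, b(r_0) = a(r_0)*s(r_0) exactly, so any two samples are "consistent" at r_0, while uniform samples are consistent only with probability about 1/q.

    import random

    q, n = 257, 4
    f = [1, 0, 0, 0, 1]                              # x^4 + 1, splits mod 257

    def ringmul(a, b):
        # multiply in (Z/q)[x]/f for monic f
        res = [0] * (2 * n - 1)
        for i in range(n):
            for j in range(n):
                res[i + j] = (res[i + j] + a[i] * b[j]) % q
        for d in range(len(res) - 1, n - 1, -1):     # reduce modulo f
            c = res[d]
            for i in range(n + 1):
                res[d - n + i] = (res[d - n + i] - c * f[i]) % q
        return res[:n]

    def ev(a, r):
        return sum(c * pow(r, i, q) for i, c in enumerate(a)) % q

    r0 = next(r for r in range(q) if ev(f, r) == 0)  # one root of f mod q

    def pathological_error():
        u = [random.randrange(q) for _ in range(n - 1)]   # uniform, degree < 3
        return ringmul([(-r0) % q, 1, 0, 0], u + [0])     # e = (x - r0) * u

    def ringlwe_sample(s):
        a = [random.randrange(q) for _ in range(n)]
        b = [(x + y) % q for x, y in zip(ringmul(a, s), pathological_error())]
        return a, b

    def uniform_sample():
        return ([random.randrange(q) for _ in range(n)],
                [random.randrange(q) for _ in range(n)])

    def consistent_at_r0(s1, s2):
        (a1, b1), (a2, b2) = s1, s2
        return (ev(b1, r0) * ev(a2, r0) - ev(b2, r0) * ev(a1, r0)) % q == 0

    s = [random.randrange(q) for _ in range(n)]
    print(consistent_at_r0(ringlwe_sample(s), ringlwe_sample(s)))  # always True
    print(consistent_at_r0(uniform_sample(), uniform_sample()))    # False w.h.p.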

A main finding from CIV'16 and P'16 is that all the EHL-ELOS-CLS instantiations have highly incongruous error distributions: they are very "poorly spread" over the ring, or modulo certain small-norm ideals.  This is because the rings and ideals are highly "skewed" as lattices, and the error distributions don't compensate for this.  (See, e.g., Figures 2 and 3 on page 18 of P'16.)  This outcome is hardly surprising, because the rings, moduli, and error distributions were specifically sought out for their insecurity.  One could similarly design weird variants of NTRU, NTRU Prime, etc. that can be broken using almost any algebraic property one wanted to malign.

What Dan's post left out is the other side of P'16 (Section 5), which shows two important facts:

(1) All the insecure error distributions differ very significantly in both "width" and "shape" from those recommended by the LPR'10 "worst-case hardness of search" theorem (which applies to any number field).  In particular, the insecure distributions are a factor of between \approx \sqrt{n} and n^{3/2} too narrow, depending on the direction; see Section 5.3.  (This is particularly relevant to Leo's message about prime cyclotomics and their ramified primes.)

(2) Any instantiation supported by the theorem (or even with slightly narrower error), for any modulus, is provably immune to the classes of attack defined in EHL-ELOS-CLS-CIV.  This is because the error is extremely "smooth" (nearly uniform) modulo any ideal of norm < 2^n, so the moment you reduce modulo the ideal (apply the homomorphism), you get random junk that's statistically independent of the secret.

For instantiations that aren't supported by theorems, we can still take a very important lesson from all this: the error distribution should be very "smooth" modulo any of q's ideal divisors of not-too-large norm -- or perhaps even modulo any such ideal, whether or not it divides q.  Both are true for NewHope, for example.


     https://eprint.iacr.org/2016/351.pdf "How (not) to instantiate
     Ring-LWE" (Peikert) gives a "new, unified exposition" of the attack
     idea developed in the papers cited above. It's not obvious to me
     what's "new" in the attack presentation

Really?  Perhaps finish the very sentence you quoted (page 2), which says: "We give a new, unified exposition of the attacks in terms of the geometry of dual ideals, and provide formal analysis to explain certain phenomena that were previously only exhibited by experiments."

Or read the lead-in to Section 3 ("Attack Framework"), which says: "In a nutshell, the attacks exploit the existence of one or more sufficiently short elements in the dual ideal of some small-norm ideal divisor of qR."

As the paper shows, the existence of short dual elements (relative to the error distribution) completely explains why all the attacks work, whereas in many cases we previously had only experiments or heuristics, and certainly no solid understanding.  Moreover, the absence of short dual vectors even lets us prove immunity to these classes of attacks.

(Regarding novelty of the exposition, none of the prior works use dual ideals in any fashion.)

Meanwhile Peikert accuses me of "serious false assertions and deep
mischaracterizations about the lattice cryptography literature, along
with poorly thought-through technical claims that fall apart under
inspection," and not correcting these errors, etc. My own view is that
Peikert has established an unfortunate pattern of misrepresenting what
I wrote, criticizing the results, portraying this as criticism of me
personally, and trying to distract people from what I actually wrote.

I think psychologists call this "projection."

Let me be very specific about some of your most significant false claims; do you stand by them or not?

* Concluding from ABD'16 that "the LPR'10 reduction is vacuous for certain number fields."  Both I and an author of ABD'16 have said why this is false: the huge approximation factors.

* Concluding from CLS'15 that "If [true statement about LPR'10] then the new paper also breaks worst-case Ideal-SVP for the targeted fields."  This is just false (and CLS'15 don't claim anything like it).

* Regarding mischaracterizations of the literature, see the previous two items, or your description of P'16 from this thread, or any of the strawmen from the "recognizing lattice mythology" thread, for a start.

It's impossible not to notice that two of the messages that Peikert
points to are risk-management analyses of the EHL-ELOS-CLS line of
research.

Got it -- "the LPR'10 reduction is vacuous for certain number fields" and "the new paper also breaks worst-case Ideal-SVP for the targeted fields" are just "risk management."  It's no wonder you're suddenly so interested in not casting blame...

D. J. Bernstein

unread,
Aug 6, 2016, 4:42:05 PM8/6/16
to Cryptanalytic algorithms
Thanks to Leo for adding additional algorithmic content, which is the
whole point of this mailing list. No thanks to Peikert for engaging in
Trump-style fabrications and evasions; I'll get back to him later.

This message focuses on the title topic. To recap what's important:

* There's an interesting new line of research (most importantly EHL,
ELOS, CLS) developing Ring-LWE attacks that rely on how the ring
polynomial f factors mod q. This is particularly interesting given
earlier claims that the arithmetic form of q cannot hurt security.

* The situation continuously since 2014 is that these are the best
attacks known against some cases of Ring-LWE. One can't understand
the boundary of what has been broken without looking at how f
factors mod q.

* These papers are, however, opposed by crusaders falsely claiming
that this entire line of research has been superseded by generic-q
attacks, and continuing to insist that factors of f mod q cannot
possibly damage security. These crusaders are deceiving users and
corrupting allocation of cryptanalytic effort.

It is clearly in the interest of the wider cryptographic community to
have clear multiple-source confirmation of the scientific facts:

* yes, the best attacks known against some cases of Ring-LWE exploit
factors of f mod q;

* no, it is definitely not true that this line of research has been
superseded by "LWE with no error is easy" or any other generic-q
attack strategy;

* yes, it is possible that factors of f mod q damage security in
cases that have been seriously proposed for deployment;

* yes, further cryptanalytic research needs to happen so that we can
figure this out;

* no, it is not true that anyone has proven that the arithmetic form
of q is irrelevant to the hardness of Ring-LWE.

The scientists who seem to benefit from confusion on this topic have a
special responsibility to help dispel the confusion. I'm disappointed
that so little confirmation has appeared in this thread.

Damien Stehle writes:
> Let me insist on a point I made in my previous post: the contexts of 
> the citations matter.

When you use a phrase with a clear literal meaning---for example, when
you say that a variation is "irrelevant to the hardness" of a problem,
or that one problem is "no easier" than another---the reader can,
should, and will interpret this phrase to mean exactly what it says.

As a writer, do you seriously expect to create "context" that redefines
normal words in the language that you _seem_ to be using, and blame the
reader for not reading your redefinitions? This is madness. You cannot
say "no easier" if you actually mean something else.

There's a wealth of standard terminology and notation for concisely
summarizing common concepts---including reductions. It's often helpful
to extend the language, and there's a practically unlimited space of
concise terminology and notation that hasn't been allocated and that
won't create miscommunication. So why do you choose terminology that
already has a meaning and _does_ miscommunicate to typical readers?

When the actual situation is, e.g., "at most polynomially easier under
certain conditions", you can't say "no easier". That's exaggeration,
oversimplification, deception. This is not a subtle issue.

[ regarding non-withdrawal of the abstract containing the quote ]
> I'm not aware of a way to do that using eprint.

The same page that you used to make a revision, and later a withdrawal,
has a big "Abstract" box that you can use to edit the abstract.

But you're missing the main point here. You ought to be horrified at the
thought that a false statement that you published---

We prove that the arithmetic form of the modulus q is irrelevant to
the computational hardness of LWE and RLWE.

---is being used as part of a campaign to suppress legitimate research.
You're obliged to issue an erratum, a clear and specific admission that
the statement is false. This is something that you still haven't done.

Revising your abstract to remove the statement (which you also haven't
done) would be wildly inadequate---it is not a clear and specific
admission of error. It also isn't necessary, and in some ways it's
undesirable, since it can make the erratum harder to find. Standard
practice is to leave the original statement where it is and put a
prominent erratum very close to it.

What particularly concerns me is that this false statement isn't an
isolated incident. There's an extremist fringe that is _constantly_
issuing similar exaggerations regarding lattice-based cryptography,
refusing to take responsibility for the consequences of their false
statements. Why don't you take action to separate your work from this
fringe? As I said before, your research is perfectly respectable without
these exaggerations.

---Dan

Christopher J Peikert

unread,
Aug 6, 2016, 7:58:03 PM8/6/16
to cryptanalytic-algorithms
On Sat, Aug 6, 2016 at 4:41 PM, D. J. Bernstein <d...@cr.yp.to> wrote:
Thanks to Leo for adding additional algorithmic content, which is the
whole point of this mailing list. No thanks to Peikert for engaging in
Trump-style fabrications and evasions; I'll get back to him later.

...says the guy throwing around evidence-free accusations of "religious crusades," suppression of research, and hatred of babies (I'm guessing).  This is fun!

Let's make cryptanalysis great again.  To extend the EHL etc. line of attacks, and also understand their limitations, start with Section 3.2 of P'16, which describes a strict generalization of those attacks (as explained at the end of Section 3.2.1):

"Let \mathfrak{q} \subseteq R be an ideal divisor of qR, having norm N(\mathfrak{q}) := |R/\mathfrak{q}|, and let \psi be a continuous error distribution over K_{\mathbb{R}}...

When N(\mathfrak{q}) is not too large, the preceding observations potentially yield attacks:
  • If \psi \bmod \mathfrak{q} is detectably non-uniform, then we immediately have a distinguishing attack against the decision problem...
  • Similarly, if \psi has one or more coefficients (relative to some fixed \mathbb{Z}-basis of \mathfrak{q}) that usually do not ``wrap around'' modulo \mathbb{Z}, then we can attack search by reducing to errorless LWE...
  • On the positive side, if \psi=D_{r} is a continuous Gaussian of parameter r \geq \eta_{\varepsilon}(\mathfrak{q}) for some very small \varepsilon, then neither of the attacks work, because every coefficient of the error ``wraps around,'' and moreover, the reduced error is statistically close to uniform."
(See also Section 3.3 for how all this translates to discrete error.)

​From all this one can construct endless choices of rings R, ideals \mathfrak{q}, and error distributions \psi for which the above-described attack will likely be the best one known, and might even stay that way for eternity.  Just ensure a short nonzero w \in \mathfrak{q}^\vee (the dual ideal) relative to the width of \psi in the direction of w.

What you won't be able to construct is a vulnerable instantiation (of crypto-relevant dimension) that's covered by the worst-case-hardness-of-search theorem (see Section 5).

One can also apply the above tests to any system proposed for practical deployment.

A few more references:

   * There's an interesting new line of research (most importantly EHL,
     ELOS, CLS) developing Ring-LWE attacks that rely on how the ring
     polynomial f factors mod q. This is particularly interesting given
     earlier claims that the arithmetic form of q cannot hurt security.

See Section 3 and in particular Corollary 3.2 of BLPRS'13 for a tight "modulus switching" reduction for LWE, which easily generalizes to Ring-LWE.  It lets one switch to a modulus of any arithmetic form, at the cost of increasing the error width by a modulus-independent amount.  The effect on the error rate \alpha depends inversely on the size of the target modulus.

What this means is that Ring-LWE for a modulus of any arithmetic form is indeed at least as hard as for any other modulus, with narrower error rate as indicated by the theorem.
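
To make the mechanics of modulus switching concrete, here is a minimal toy sketch in Python (my own illustration with made-up toy parameters; it is not the precise reduction of Corollary 3.2, which also adds fresh error): a sample (a, b = <a,s> + e mod q) is mapped to (round(a*p/q), round(b*p/q)) mod p, and the result is a valid sample mod p whose error is the scaled original error plus a rounding term whose size depends on the secret, not on the arithmetic form of either modulus.

    import random

    n, q, p = 64, 12289, 4093            # toy moduli of very different "arithmetic form"

    def center(x, m):                    # representative of x mod m in (-m/2, m/2]
        x %= m
        return x - m if x > m // 2 else x

    def switch(v, q, p):
        return [round(x * p / q) % p for x in v]

    s = [random.randrange(-1, 2) for _ in range(n)]          # small (ternary) secret
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-4, 5)                              # small error
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q       # LWE sample mod q

    a2, b2 = switch(a, q, p), switch([b], q, p)[0]           # switched sample mod p
    e2 = center(b2 - sum(ai * si for ai, si in zip(a2, s)), p)
    print("error after switching:", e2)                      # roughly e*p/q plus rounding noise

The rounding term is the "modulus-independent amount" referred to above: it depends on the size of the secret, not on whether q or p is split, inert, prime, or composite.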

   * These papers are, however, opposed by crusaders falsely claiming
     that this entire line of research has been superseded by generic-q
     attacks, and continuing to insist that factors of f mod q cannot
     possibly damage security. These crusaders are deceiving users and
     corrupting allocation of cryptanalytic effort.

Note that the only evidence presented for "crusaders falsely claiming..." is a single anonymous quote that Google can't find.  Then again, whoever wrote it might be a crusader; we'll have to take Dan's word for it.

As for the public literature, CIV'16 shows that all the concrete instantiations from ELOS'15 fall to a generic-q, errorless LWE attack on search.  And P'16 says at the top of p3:

"We also show that the instantiations from [CLS15, CLS16], with slightly narrower error distributions, fall to the same kind of attack [errorless LWE against search]. Finally, we give formal analyses showing why the (unmodified) instantiations of [CLS15, CLS16] are broken by a different but closely related distinguishing attack."  (See Sections 4.3 and 4.4.)
 
The scientists who seem to benefit from confusion on this topic have a
special responsibility to help dispel the confusion. I'm disappointed
that so little confirmation has appeared in this thread.

I hope the above material has helped to dispel your or anyone else's confusion.
 
Damien Stehle writes:
> Let me insist on a point I made in my previous post: the contexts of 
> the citations matter.

When you use a phrase with a clear literal meaning---for example, when
you say that a variation is "irrelevant to the hardness" of a problem,
or that one problem is "no easier" than another---the reader can,
should, and will interpret this phrase to mean exactly what it says.

We're still talking about a preprint that is prominently marked as withdrawn, right?  Just checking.

You're obliged to issue an erratum, a clear and specific admission that
the statement is false. This is something that you still haven't done.

Great!  We all look forward to the same for your false claims about the implications of CLS'15 and ABD'16.

Christopher J Peikert

unread,
Aug 7, 2016, 9:37:17 AM8/7/16
to cryptanalytic-algorithms
Lest any religious zealots be deceived, I should add an important point:

See Section 3 and in particular Corollary 3.2 of BLPRS'13 for a tight "modulus switching" reduction for LWE, which easily generalizes to Ring-LWE.  It lets one switch to a modulus of any arithmetic form, at the cost of increasing the error width by a modulus-independent amount.  The effect on the error rate \alpha depends inversely on the size of the target modulus.

The tight reduction from this Corollary refers to a version of LWE that the attacker must solve for any error rate below a given bound; hence the \leq \alpha, \leq \beta in the subscripts.  This is arguably a more realistic model than error rate = \beta, since every known attack works better for smaller error (all else being equal).

But adversaries are adversarial, and can conceivably "intentionally fail" if they solve LWE and find that the error looks "too small."  Hence, to make the reductions rigorous for error rates = \beta we need to resort to the seemingly artificial technique of adding different extra amounts of error.  This can hurt the reduction's runtime (its tightness) in various ways, depending on whether we have a search or decision problem, how widely the unknown error rate can vary, how many samples are used, etc.  As always, one should read and understand the theorem statements.