to follow up on this, maybe I’m overlooking something, but isn’t there a
discrepancy in the NTRUPrime paper itself on the cost of enumeration?
a) page 19 first claims it has complexity `2^O(β^2)`
b) page 19 then references Kannan’s paper which gave an algorithm in
complexity `β^O(β)` and refers to enumeration as “slightly
super-exponential” (for which `2^O(β^2)` wouldn’t qualify in my book?)
Equation 4 of the NTRU Prime paper is immediately prefaced by a disclaimer that it is an "estimate of the work required to perform BKZ-2.0". It would be a significant misreading to interpret Equation 4 as applying generically to BKZ/HKZ-type algorithms.
I misspoke earlier when I said that they use fixed block size preprocessing. Looking at Chen's thesis ([1], Table 5.2), it's clear they use progressive BKZ preprocessing. The optimal preprocessing will not be recursive BKZ w/ block size k - O(log(k)) for all k. That's an asymptotic result, and Chen and Nguyen tried to find optimal preprocessings in specific dimensions.
I'm very curious about explicit cost estimates for algorithms with known 2^O(k log k) asymptotics. Does anyone claim a leading constant smaller than 0.27?
If you want people to believe that you care about accurate security
estimates, it might help to quote and respond to what I actually wrote
on this topic, and try to identify any remaining points of disagreement,
instead of fabricating strawman arguments.
> 2) in the paper, you give a lot of room to the meet-in-the-middle
> attack and its hybrid version. Aren't those attacks also very memory
> expensive? Why not discard them on the same grounds?
Huh? What's the alternative to the hybrid attack supposed to be?
The only fast methods known for finding a short basis, or merely _some_
basis, or merely a basis of a small-index sublattice, are exploiting
extremely special algebraic features of number fields with small Galois
groups---producing cyclotomic units, for example, and elliptic units.
The NTRU Prime paper recommends against fields with small Galois groups.
> The only fast methods known for finding a short basis, or merely _some_
> basis, or merely a basis of a small-index sublattice, are exploiting
> extremely special algebraic features of number fields with small Galois
> groups---producing cyclotomic units, for example, and elliptic units.
> The NTRU Prime paper recommends against fields with small Galois groups.
I got that, yes, thanks. In this light, do you see any deep algebraic reason why X is
always a unit in those rings, or is it just about re-writing X^p = X - 1? (Again, a
non-rhetorical question; I'm hoping to learn something, not to argue about claims here.)
Computing _any_ basis of the log-unit lattice is a famous problem in
computational algebraic number theory. The NTRU Prime paper quotes
Cohen's classic book [23, page 217] calling this one of the five "main
computational tasks of algebraic number theory".
You are incorrectly equating expensive operations with cheap operations.
Waiting for 2^152 accesses to 2^83.7 faraway bits of memory takes more
physical time _and_ more hardware _and_ more energy than carrying out
2^129.7 enumeration steps on each of 2^60 small parallel processors.
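Under a two-dimensional-mesh cost model (an assumption for illustration: one random access into a memory of 2^m bits costs about 2^(m/2) elementary operations), the comparison above can be sketched as:

```python
# Assumed 2D-mesh cost model: a random access into a memory of
# 2^m bits costs about 2^(m/2) elementary operations.
def mitm_cost_log2(accesses_log2, memory_bits_log2):
    """log2 of total cost: (number of accesses) * sqrt(memory size)."""
    return accesses_log2 + memory_bits_log2 / 2

def enum_cost_log2(steps_log2, processors_log2):
    """log2 of the area-time product for parallel enumeration."""
    return steps_log2 + processors_log2

mitm = mitm_cost_log2(152, 83.7)   # 152 + 83.7/2 ~= 193.9
enum = enum_cost_log2(129.7, 60)   # 129.7 + 60   = 189.7
print(mitm > enum)                 # the memory-bound attack costs more
```

This is only a back-of-the-envelope sketch; the model (and the sqrt-of-memory access cost in particular) is my choice, not a figure from the thread.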
Finally, about the widely used quadratic fit:
"My understanding is (still) that a quadratic fit is appropriate for this data."
"I'm going to stand by my choice of a quadratic fit since I'm still not convinced that a k log k fit is realistic for the relevant range of block sizes. It's true that for very large k you will want to switch to a recursive preprocessing. Once you do, your scaling from that point will be like 2^O(k log k), but I'm not convinced this is relevant for attacks of cost less than 2^256."
Preprocessing stronger than LLL is used from block size ~ 60.
So unless you are interested in 25 bits of security, it is used.
The reasons behind the choice of 2^O(k log k) are that:
- it is the proven asymptotic complexity
- it is the heuristic asymptotic complexity
Let's go back to the k^2 vs k log k fit question. If we only consider block sizes between
100 and 400 then the difference |f(k) - g(k)| between
f(k) = 0.270189 k log(k) − 1.0192 k + 16.10 (from [2]), and
g(k) = 0.000784314k^2 + 0.366078k − 6.125 (from [3])
is less than 10. This isn't surprising since they're fit to the same data between 100 and 250.
At k=400 we have f(400) = 256 and g(400) = 266.
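For what it's worth, these values come out as quoted when log denotes the natural log (an assumption on my part about the fit in [2]); a quick check:

```python
import math

# f: the k log k fit from [2]; g: the quadratic fit from [3].
# math.log is the natural log, which is what makes the numbers match.
def f(k):
    return 0.270189 * k * math.log(k) - 1.0192 * k + 16.10

def g(k):
    return 0.000784314 * k**2 + 0.366078 * k - 6.125

# |f - g| stays below 10 over the whole range 100..400 ...
max_gap = max(abs(f(k) - g(k)) for k in range(100, 401))
print(round(max_gap, 2))             # just under 10 (at k = 400)

# ... and the endpoint values match the ones quoted above.
print(round(f(400)), round(g(400)))  # 256 266
```

With log base 2 instead, f(400) comes out near 542, which may explain failed replications.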
Some of the other estimates that you gave in other posts are much closer together:
So, maybe you copy + pasted the wrong equation?
(I'm just trying to bring some clarity to what is currently a very confusing conversation.)
Hi John,

Maybe I'm being dumb, but I'm unable to replicate this:

> Let's go back to the k^2 vs k log k fit question. If we only consider block sizes between
> 100 and 400 then the difference |f(k) - g(k)| between
> f(k) = 0.270189 k log(k) − 1.0192 k + 16.10 (from [2]), and
> g(k) = 0.000784314k^2 + 0.366078k − 6.125 (from [3])
> is less than 10. This isn't surprising since they're fit to the same data between 100 and 250.
> At k=400 we have f(400) = 256 and g(400) = 266.

In particular, when I plot these two functions in the proposed range, I get wildly different values.
Christopher J Peikert writes:
> We should also remember that CGS-style attacks -- which are why we are
> discussing the unit group in the first place
No.
I believe that I'm entitled to an acknowledgment of the actual history
here.
I fully realize that, in Cryptography According To Chris, this track
record is _not_ a cryptographic success story. What matters for Chris is
the extreme question of what has already been broken.
“Lattice-based” cryptosystems are not based on “lattices” per se, but rather on certain computational problems on lattices. There are many kinds of lattice problems, not all of which appear to be equally hard—therefore, not all lattice-based cryptosystems offer the same qualitative level of security. For ideal lattices, the choice of problem matters at least as much as the choice of ring.
... Rings themselves are not “easy/weak” or “hard/strong”—problems and their instances are. The attack [on SV/Soliloquy] shows that specializing a problem too much, or relying on random instances whose distribution we don't really understand, can introduce unexpected weaknesses.
> None of Ring-LWE, NTRU, or NTRU Prime give out principal ideals at
> all, much less ones with short generators.
The whole reason that these SV attacks are looking for generators is
that the SV secret can be viewed as a generator of an ideal that's
publicly known---and of course the details of this view force the
generator to be short. The obvious strategy to adapt this to other
systems is to view other secrets as short generators of public ideals.
> we should remember that we are not dealing with the *general* problem
> of unit group computation (i.e., on arbitrary number rings), but
> rather a very specific one in rings of the form Z[x]/(x^p-x-1) for
> prime p.
Similarly, the extensive literature on unit-group computation includes
some techniques that work for all fields, some techniques that are
limited to quite unusual fields, and some techniques in between.
> As seen above, it is easy to find at least a few short units in such
> rings; I don't know whether that's the case for arbitrary number rings
> or not.
Exercise: Learn something about the slightly more general theory of
S-units (start from, e.g., Chapter 7 of Cohen's followup book). Then
figure out how, given _any_ number ring, you can trivially write down a
few independent S-units for suitable S.
The distinction between units and S-units is minor from theoretical and
computational perspectives, and also doesn't seem relevant to attacks.
> The distinction between units and S-units is minor from theoretical and
> computational perspectives, and also doesn't seem relevant to attacks.

Really? The analogous "log-S-unit attack" wouldn't even give you an element of the input ideal, but instead one in some other fractional ideal with arbitrary extra (positive or negative) powers of the ideals in S. This is pretty relevant, to say the least: the non-unit S-units can only make things worse, so we're better off just throwing them out to start with, thus putting us right back in the plain-unit regime.
Chris wrote: "None of Ring-LWE, NTRU, or NTRU Prime give out principal ideals at all, much less ones with short generators. So while the question of computing short units is certainly interesting in its own right, it's currently not known whether it means anything for the security of these systems."
Dan wrote: "The whole reason that these SV attacks are looking for generators is that the SV secret can be viewed as a generator of an ideal that's publicly known---and of course the details of this view force the generator to be short. The obvious strategy to adapt this to other systems is to view other secrets as short generators of public ideals. The conceptual gap between "principal ideal" and "principal ideal with short generator" simply isn't relevant to any of these attacks."
Dan wrote: "Automorphisms play a huge theoretical and computational role in the massive literature on algebraic number theory---and in an increasing number of successful attacks against ideal-lattice-based cryptography. The distinction between units and S-units is minor from theoretical and computational perspectives, and also doesn't seem relevant to attacks."
Chris wrote: "One can recover some kind of discreteness by adding extra "coordinates" to the log-embedding, corresponding to the non-Archimedian "places" for the prime ideals in S. (This is all standard.) But this new space is non-Archimedian, so it has little if any connection with the geometry of the ring, where we are trying to find a short element of a given ideal."
I was recently pointed to the long-awaited paper about NTRUPrime: https://ntruprime.cr.yp.to/ntruprime-20160511.pdf
I find the proposal intriguing, and worth consideration. Unfortunately, the current concrete security analysis is bogus and the security claims inaccurate. A quick counter-analysis suggests the security of the proposal is overestimated by about 75 bits.
The first mistake is equation (4) (top of page 19) in Estimate(β,n,p). The chosen model for the cost of enumeration doesn't match the asymptotics of the BKZ/HKZ types of algorithms. The given model is in 2^O(β^2), while the best known algorithm (Kannan's SVP/HKZ algorithm) runs in time 2^O(β log β). Provable yet reasonably efficient versions of Kannan's algorithm were recently studied by Micciancio and Walter [http://eprint.iacr.org/2014/569]. More generally, even BKZ+Enum has (heuristically?) cost 2^(O(β log β)) when the blocksize is appropriately chosen: this is discussed in [https://eprint.iacr.org/2015/046.pdf].
The second mistake stems from the conventional misconception that sieving algorithms are only useful to solve SVP. The authors only consider the cost of sieving on the full lattice (or maybe only half of it, since the lattice naturally has dimension 2p, but the cost is taken to be 2^{.292p} in Alg. 1). This is not true: sieving can be used as a replacement for enumeration inside BKZ. This brings the cost of BKZ down to poly(n)*2^(.292 β) (where the poly may be made linear with various heuristic tricks such as progressive-BKZ; I personally recommend setting this poly(n) factor to 1 for any security claim).
Tweaking the script PQsecurity.py of NewHope*, which does take account of sieving inside BKZ, one obtains that the BKZ algorithm will find the secret key via the ``primal attack'' when run with blocksize β=480 on the lattice constructed from m=600 equations mod q out of the 739. This translates to λ = .292β = 140 bits of security, or slightly more if one is ready to make a wild guess** on the polynomial factors.
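For readers unfamiliar with this style of accounting: the estimate above follows the "core-SVP" convention, where the cost of BKZ with blocksize β is taken to be just 2^(0.292 β) and all polynomial factors are set to 1. A minimal sketch of that arithmetic (my paraphrase, not the actual PQsecurity.py script):

```python
# Core-SVP style accounting: BKZ with blocksize beta, sieving as the
# SVP oracle, all polynomial factors set to 1 (as recommended above).
SIEVE_EXPONENT = 0.292  # heuristic classical sieving constant

def core_svp_security_bits(beta):
    """Claimed security in bits for a primal attack needing BKZ-beta."""
    return SIEVE_EXPONENT * beta

# Blocksize beta = 480 reported above for the primal attack:
print(round(core_svp_security_bits(480)))  # 140
```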
In conclusion, the scheme offers a fair amount of security according to current classical attacks, but far below the 215 bits claimed. In particular, it offers less security than ntruees787ep1 (but this is obvious since the dimension is smaller, the error density similar, and the modulus larger) and much less security than NewHope (it is more comparable to JarJar).
I do understand the intent of minimizing the presence of algebraic handles, and definitely think there is some value in an alternative to cyclic/cyclotomic rings. But with respect to ``known attacks'' rather than ``yet to be discovered attacks'', the NTRUprime proposal is significantly less conservative than what it is compared to, so I am not sure the comparison of Fig 1.1 is as significant as one would hope.
When re-parametrized properly to compare NTRUprime to other schemes, it shouldn't be surprising to see a performance gap with cyclotomics. Considering how fast all this already is, a factor of 2x on time could be considered perfectly acceptable. It could also be possible to optimize the NTRUprime scheme a bit: Theorem 2.1 is overkill, and tighter bounds could certainly be established probabilistically.
-- Leo
* The currently available script and paper [https://eprint.iacr.org/2015/1092.pdf] are not accurate (fortunately, this underestimates the cost of the dual attack); in 6.4, equation (2) should instead read
- 2 π^2 τ^2 ≥ ln (ε/4)
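Rearranging the corrected inequality for τ (my rearrangement, for illustration only) gives τ ≤ sqrt(−ln(ε/4)/(2π²)) whenever ε < 4. A small sanity-check sketch:

```python
import math

# Largest tau satisfying -2 * pi^2 * tau^2 >= ln(eps / 4), obtained by
# rearranging the corrected inequality (valid whenever eps < 4).
def tau_max(eps):
    return math.sqrt(-math.log(eps / 4) / (2 * math.pi ** 2))

# At the boundary the inequality holds with equality (up to rounding).
t = tau_max(2 ** -64)
print(-2 * math.pi ** 2 * t ** 2 >= math.log(2 ** -64 / 4) - 1e-9)  # True
```

The choice ε = 2^-64 is purely illustrative; the intended value of ε is whatever the NewHope analysis uses.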
A repaired script is attached, and we are doing our best to push out the final version of NewHope as soon as possible.
** Part of the hidden factor of the cost of sieving could be amortized inside BKZ, by some recycling trickery. I advise against making wild guesses.