Dear Dan, dear PQC-forum
(cc: Kirsten, Tanja, Karl, Alice, and Christine)
TL;DR:
- We thank the authors for the bug report on [DPW19]. We repeat the
already stated limitations of [DPW19].
- We note that the conjecture on slide 47 (a subexp_{1/2} algorithm for
(1+ε)-Ideal-SVP) is given absolutely no justification or explanation.
- With what is given, and a standard heuristic, we infer that the proposed
algorithm is no better than the existing literature [PHS19].
- We invoke Sagan's standard.
First, I (Leo) would like to thank you for spotting the "graphing typo"
in our paper [DPW19]. It has now been corrected on the eprint.
I'm also taking the opportunity to point out the stated limitation of
our lower bound: it applies only to specific optimizations of the
poly-time algorithm obtained by combining
[EHKS14]+[CDPR16]+[BS17]+[CDW17]. Indeed, we already acknowledge in the
paper that the S-unit approach [PHS19], with sub-exponential complexity,
already violates this lower bound. This is explicit in the introduction
and again in Section 7.4. Breaking it is neither surprising nor new.
Thank you also for sharing your interesting work in progress. The new
way of constructing S-units sounds quite interesting.
However, we (Leo and Alice) are surprised to see a sudden jump to an
extraordinary conjecture on slide 47: a subexp_{1/2} algorithm for
(1+ε)-Ideal-SVP. No justification or explanation is given. The rest of
the talk includes irrelevant experiments in dimensions 32 and 64. These
experiments concern a different algorithm from the one on slide 47. In
particular, the dimension-64 experiment brute-forces collisions in the
class group: this would unavoidably scale super-exponentially in time
and memory, simply because of the growth of the cardinality of that group.
Our analysis of the algorithm of slide 47 suggests that it leads to a
sub-exponential approximation factor of at least subexp_{1/4}(n), similar
to prior claims [PHS19]. Indeed, this algorithm seems to follow [PHS19]
with the
following variations:
- a different construction for the set of S-units (precomputation)
- the use of d = subexp_{1/2}(n) many places rather than O(n)
In particular, the main loop appears to be essentially the Laarhoven
[Laa16] algorithm, written multiplicatively. In the literature, the key
remark for analyzing how often Log(u/v) reduces Log(g) is the following
fact:
in dimension d, if x,y are uniform on spheres of radii a, b respectively
(with a > b), then the probability that y reduces x (i.e., ||x-y|| <
||x||) is (1-(b/2a)^2)^{Θ(d)}.
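For concreteness, here is a small Monte Carlo sketch of this heuristic in
Python. The dimensions, radii and sample count are arbitrary illustrative
choices of ours, not parameters taken from your slides.

import numpy as np

def reduction_probability(d, a, b, trials=100_000, seed=0):
    """Estimate Pr[ ||x - y|| < ||x|| ] for x, y uniform on spheres of radii a > b."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((trials, d))
    x *= a / np.linalg.norm(x, axis=1, keepdims=True)  # uniform on the radius-a sphere
    y = rng.standard_normal((trials, d))
    y *= b / np.linalg.norm(y, axis=1, keepdims=True)  # uniform on the radius-b sphere
    return np.mean(np.linalg.norm(x - y, axis=1) < a)

a, b = 1.0, 0.5
for d in (16, 32, 64, 128):
    # The heuristic predicts (1 - (b/2a)^2)^{Theta(d)}; we print
    # (1 - (b/2a)^2)^{d/2} next to the estimate, which matches it up to
    # factors polynomial in d.
    print(d, reduction_probability(d, a, b), (1 - (b / (2 * a)) ** 2) ** (d / 2))

The only point of such an experiment is that the success probability of a
single reduction attempt decays exponentially with the dimension d.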
According to this heuristic, the probability of reducing Log(g) to a
size comparable to Log(u/v) would be *ridiculously* small: exp(-Θ(d)) =
exp(-subexp(n)). This heuristic might need refinements given the integer
and sparse nature of the Log on the finite places. Yet, even just
projecting onto the O(n) infinite places, we do not see how your algorithm
could find something smaller than Ω(n^{1/4}) * ||Log(u/v)|| for the given
parameters. That leads to an approximation factor of at least
subexp_{1/4}(n), and maybe much more because of the huge number of
ignored finite places. That is no better than [PHS19] for the same
online running time.
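To give an order of magnitude for the exp(-Θ(d)) above, here is a
back-of-the-envelope computation of ours, taking exp(sqrt(n log n)) as one
representative choice of subexp_{1/2}(n) and n = 256 as an arbitrary
example; the numbers only convey scale.

import math

n = 256
d = math.exp(math.sqrt(n * math.log(n)))        # one reading of d = subexp_{1/2}(n)
print(f"d ~ 2^{math.log2(d):.1f}")              # number of places: about 2^54
print(f"exp(-d) ~ 2^({-d / math.log(2):.3g})")  # per-attempt success probability

Compensating a success probability of that order would require
exp(subexp(n)) repetitions, which is what we mean by *ridiculously* small.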
Of course, this may miss subtleties not presented on your slides.
But after all, the burden of proof lies on your side.
Extraordinary claims require extraordinary evidence.

All the best,
-- Léo Ducas & Alice Pellet-Mary