Non-randomness of S-unit lattices


D. J. Bernstein

Oct 23, 2021, 3:52:01 PM
to pqc-...@list.nist.gov
In a recent talk https://cr.yp.to/talks.html#2021.08.20, I said

What makes this algorithm go quickly is the ability to find short
vectors in the S-unit lattice, small S-units. Now this might sound
circular because the whole starting problem was to find short vectors
in a lattice, namely I, and now I'm telling you that what's really
important for speed is finding short vectors in a lattice, namely the
S-unit lattice. But what makes this work is, the S-unit lattice is an
amazingly special lattice with all sorts of wonderful algebraic and
analytic features, which allow you to just really efficiently write
down a bunch of really short vectors.

This talk was the first announcement of the results of a massive
research project. I'm pleased to announce that the first paper on the
project results is now available. This paper is from Tanja Lange and me,
and focuses on the special analytic features mentioned in the quote
above as a prerequisite for the performance of the algorithm. The paper
is titled "Non-randomness of S-unit lattices" and has the following
abstract:

Spherical models of lattices are, modulo variations in terminology
and minor details, standard tools in the study of lattice-based
cryptography. Spherical models are used to predict the lengths of
short vectors in lattices and the effectiveness of reduction modulo
those short vectors. These predictions are consistent with an
asymptotic theorem by Gauss, theorems on short vectors in almost all
lattices from the invariant distribution, and a variety of
experiments in the literature.

S-unit attacks are a rapidly developing line of attacks against
structured lattice problems. These include the quantum
polynomial-time attacks that broke the cyclotomic case of Gentry's
original STOC 2009 FHE system under minor assumptions, and newer
attacks that have broken through various barriers previously claimed
for this line of work.

S-unit attacks take advantage of auxiliary lattices, standard
number-theoretic lattices called S-unit lattices. Spherical models
have recently been applied to these auxiliary lattices to deduce core
limits on the power of S-unit attacks.

This paper shows that these models underestimate the power of S-unit
attacks: S-unit lattices, like the lattice Z^d, have much shorter
vectors and reduce much more effectively than predicted by these
models. The attacker can freely choose S to make the gap as large as
desired, breaking through the core limits previously asserted for
S-unit attacks.

URL: https://cr.yp.to/papers.html#spherical

---Dan

Christopher J Peikert

Oct 26, 2021, 8:01:28 AM
to pqc-forum
Dear all,

On Sat, Oct 23, 2021 at 3:52 PM D. J. Bernstein <d...@cr.yp.to> wrote:
In a recent talk https://cr.yp.to/talks.html#2021.08.20, I said

The conclusion of this talk made the following striking algorithmic claim: "Heuristics imply [Hermite factor] <= n^{1/2+o(1)} in time exp(n^{1/2+o(1)})" for cyclotomic Ideal-SVP.

More than nine weeks later, the announced paper is quite far from any substantiation of this claim (and does not attempt to provide one).

Prior works already noted ways in which log-unit and S-unit lattices differ from random lattices, and the new paper makes this much more precise in various ways---but that is certainly not enough to justify the above claim.

Since so much time has passed already, I am curious whether the claim is still in effect, and when the community can expect to see it substantiated.

(As additional context: more than a month ago I privately asked the speaker for a justification, but received no reply. I also asked others who were listed as collaborators, and they said they do not know how to substantiate the claim. So, the situation is certainly unusual, and some clarity is due.)

Sincerely yours in cryptography,
Chris

D. J. Bernstein

Oct 30, 2021, 4:50:58 PM
to pqc-...@list.nist.gov
Executive summary: The records show Dr. Ducas, Dr. Pellet-Mary, and
Dr. Peikert pinpointing the central topic of dispute. Their position has
been debunked by the new paper https://cr.yp.to/papers.html#spherical.

Details follow.

Given the situation, it's appropriate for me to release a full copy (see
below) of Dr. Peikert's message to me dated 24 Sep 2021 21:30:11 -0400.
Everyone can now see that Dr. Peikert highlighted "the difficulties
raised by Léo Ducas & Alice Pellet-Mary".

The statement from Dr. Ducas and Dr. Pellet-Mary was a pqc-forum message
dated 24 Aug 2021 07:45:56 -0700, disputing my talk on S-unit attacks
given four days earlier (https://cr.yp.to/talks.html#2021.08.20). There
is a crystal-clear gap between

* the 20 August talk saying, from experiments and number-theoretic
heuristics, that the attacks work (getting better and better as #S
increases, as the experiments illustrate); and

* the 24 August statement saying, from standard lattice heuristics,
that the attacks have "*ridiculously* small" success probability
(exponentially small in #S).

It's also clear where the gap is coming from: the 24 August statement is
highly inaccurate, because the standard lattice heuristics, _when
applied to S-unit lattices_, are highly inaccurate. This is shown in
https://cr.yp.to/papers.html#spherical, "Non-randomness of S-unit
lattices", which is the topic of this thread and the first paper coming
out of the underlying research project. (The 20 August talk had already
announced the paper's results, obviously in much less detail.) See my
email dated 23 Oct 2021 21:55:07 +0200 for a full reply to the statement.

Regarding Dr. Peikert's new claim (in public email dated 26 Oct 2021
08:01:11 -0400) that this paper, rather than being directly on point,
merely makes more "precise" something that was already known: No, the
claim of "*ridiculously* small" probability can't be swept under the rug
and retroactively interpreted as a mere matter of precision. This isn't
some subtle dispute.

Regarding NISTPQC, NIST rushed to issue judgment on 28 Aug 2021 17:10:52
-0700, cc'ing the sender of the 24 August statement and evidently giving
the statement vastly more weight than the talk. No rationale was stated
for this bias.

Where's the erratum from Dr. Ducas and Dr. Pellet-Mary? Where's the
admission from Dr. Peikert that the "difficulties" highlighted in his
email are debunked by the first paper from this project? Where's the
apology from NIST for mishandling this matter?

If the answer is that it's inappropriate just one week after this paper
appeared to ask for public admissions that the paper is correct: Was it
appropriate for Dr. Ducas and Dr. Pellet-Mary to publicly claim the
opposite four days after the talk? Was it appropriate for NIST to adopt
those conclusions four days later? Is it appropriate for Dr. Peikert to
be complaining that "so much time has passed"? At this point there's
ample documentation showing how massive the scope of this project is:

* One part of the talk---the special analytic features of S-unit
lattices---occupies https://cr.yp.to/papers.html#spherical, a
58-page paper.

* Something that the talk presented in much more detail, namely the
efficient construction of the full p-unit group for cyclotomics via
Jacobi sums and square roots, has been written up by another team and
occupies https://eprint.iacr.org/2021/1384, a 54-page paper, along
with math background in https://arxiv.org/abs/2109.13329, a 20-page
paper. (Presumably that team's work was done independently.)

* The talk goes beyond this, notably by exploiting the full S-unit
group for larger S. This is much bigger than gluing together the
p-unit groups across p, and finds much shorter vectors as n grows,
as the talk's publicly verifiable pi-digit examples illustrate.

The full story will occupy many more pages---but the central mistake
that Dr. Ducas, Dr. Pellet-Mary, and Dr. Peikert made has already been
debunked in detail by the first paper. If their erratum requires more
time to issue, that's totally fine, but I would ask Dr. Peikert to (1)
stop complaining about the number of weeks required for full writeups of
massive results and (2) stop pretending that the central dispute hasn't
been addressed.

Finally, given how glaring the gap is between

* the clear highlighting of the topic of dispute in the statement by
Dr. Ducas and Dr. Pellet-Mary dated 24 Aug 2021 07:45:56 -0700 and
in the email from Dr. Peikert dated 24 Sep 2021 21:30:11 -0400

and

* the way the history is misrepresented in Dr. Peikert's email dated
26 Oct 2021 08:01:11 -0400,

it probably isn't necessary to address Dr. Peikert's further attempts
to impugn the research results, but I nevertheless plan on doing so in
due time.

---Dan


Christopher J Peikert writes:
> Subject: Re: [pqc-forum] S-unit attacks
> From: Christopher J Peikert <cpei...@alum.mit.edu>
> Date: Fri, 24 Sep 2021 21:30:11 -0400
> To: "D. J. Bernstein" <d...@cr.yp.to>
> Message-ID: <CACOo0QhO7gQqufic9F5W7UPr...@mail.gmail.com>
>
> Hi Dan, this is regarding your talk from five weeks ago, which concluded:
> "Heuristics imply [Hermite factor] ≤ n^{1/2+o(1)} in time exp(n^{1/2+o(1)}" for
> cyclotomic Ideal-SVP.
>
> In all this time, have you found any way to substantiate this extraordinary
> claim?
>
> If so, how does it circumvent the difficulties raised by Léo Ducas & Alice
> Pellet-Mary, and will a paper be available soon?
>
> If not, shouldn't you announce a retraction on the PQC forum?
>
> This matter is too important to leave in limbo any longer, especially since
> NIST has requested comments by the end of October.

Christopher J Peikert

Oct 30, 2021, 5:41:37 PM
to pqc-forum
Dan, in your long reply, I didn't see a straight answer to my question about the algorithmic claim from the end of your talk of 10+ weeks ago, namely:

   "Heuristics imply [Hermite factor] <= n^{1/2+o(1)} in time exp(n^{1/2+o(1)})" for cyclotomic Ideal-SVP.

As I see it, the central question is this: is this claim still in effect, and if so, when can the community expect to see it substantiated?

For such a remarkable claim, one would normally expect to see a research paper backing it up, and well before this much time has passed.

Sincerely yours in cryptography,
Chris

Leo Ducas

Nov 1, 2021, 7:10:25 AM
to pqc-forum, cpei...@alum.mit.edu
For completeness: there is another similar thread on the matter, where I posted relevant answers.

All the best
-- Leo

=== REPOST ===

Dear Dan,

> Regarding this particular aspect of what has happened, I appreciate your
> acknowledging this as a breakthrough [...]

I'm flattered that you value our opinion so much. Since Alice is
unavailable at the moment, I'll answer on my own.

> The claims of contrary evidence boil down to mis-modeling S-unit
> lattices as random lattices; those claims have been disproven and
> should be withdrawn.

I am afraid that your counter-counter-analysis is not sufficient to
dismiss the counter-analysis we brought. Indeed, the following fact
trivially generalizes to arbitrary fixed y on its sphere, with only x
uniformly random:

>> in dimension d, if x,y are uniform on spheres of radii a, b respectively
>> (with a > b), then the probability that y reduces x (i.e., ||x-y|| <
>> ||x||) is (1-(b/2a)^2)^{Θ(d)}.

This means that the set S = {Log(u/v)} can be arbitrary, and only the
target t = Log(g) needs to have a random direction. The lower bound on
the output length of the iterative slicer then stands, by a union
bound. The spherical model for lattices is only needed to prove an
upper bound on this output length. In other words: the lattice being
non-spherical can only make this algorithm worse, not better.
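
For concreteness, the quoted estimate is easy to check numerically.
A rough Monte Carlo sketch (illustrative only, my own notation):

    import numpy as np

    # Estimate Pr[||x - y|| < ||x||] for x, y uniform on spheres of
    # radii a > b in dimension d; the quoted estimate says this decays
    # as (1 - (b/2a)^2)^{Theta(d)}.
    rng = np.random.default_rng(1)
    a, b, trials = 1.0, 0.5, 50_000
    for d in [10, 20, 40, 80]:
        x = rng.standard_normal((trials, d))
        y = rng.standard_normal((trials, d))
        x *= a / np.linalg.norm(x, axis=1, keepdims=True)  # radius a
        y *= b / np.linalg.norm(y, axis=1, keepdims=True)  # radius b
        p = np.mean(np.linalg.norm(x - y, axis=1) < a)     # ||x|| = a
        print(f"d={d:3}: empirical Pr[y reduces x] = {p:.4f}")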

So, for the lower bound we brought up, the randomness of the target
suffices. Because the algorithm fails to reach the desired length for
a random spherical t, it also fails in the worst case, and the worst
case is what we are after when we talk about Ideal-SVP. A relevant
average-case problem is also defined in [BDPW20], if you care.

In a more concrete sense, it seems that your mistake is a confusion
between SVP and CVP, or, purely geometrically, between the minimal
distance and the covering radius. You brought up Z^n as a valuable
example. Yes, Z^n has a minimal distance of 1. However, its covering
radius is still Θ(sqrt(n)). The average distance to Z^n from a random
point modulo Z^n is also Θ(sqrt(n)). In fact, the constant hidden in
the Θ is even worse for Z^n than it is for a random lattice.
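
These Z^n facts are easy to check numerically as well (again an
illustrative sketch, not a proof):

    import numpy as np

    # For a random target t modulo Z^n, the distance to the nearest
    # point of Z^n concentrates around sqrt(n/12); the covering radius
    # (deep hole at (1/2, ..., 1/2)) is sqrt(n)/2. Both are
    # Theta(sqrt(n)), even though the minimal distance of Z^n is 1.
    rng = np.random.default_rng(2)
    for n in [64, 256, 1024]:
        t = rng.random((10_000, n))       # uniform targets in [0,1)^n
        per_coord = np.minimum(t, 1 - t)  # distance to nearest integer
        avg = np.linalg.norm(per_coord, axis=1).mean()
        print(f"n={n:5}: covering radius {np.sqrt(n)/2:6.2f}, "
              f"avg distance {avg:6.2f} vs "
              f"sqrt(n/12) = {np.sqrt(n/12):6.2f}")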

I hope this clarifies things.
-- Léo

[BDPW20] Koen de Boer, Léo Ducas, Alice Pellet-Mary, and Benjamin
Wesolowski. Random Self-reducibility of Ideal-SVP via Arakelov Random
Walks.

