subexponential quantum uSVP?


D. J. Bernstein

Sep 10, 2015, 9:51:41 AM
to cryptanalyti...@googlegroups.com
There's a Dagstuhl workshop this week on Quantum Cryptanalysis. Alex May
has just given a talk on joint work in progress with Elena Kirshanova
whose tentative conclusion is "Theorem(?): There exists a L[2/3]-alg.
for computing poly(n)-uSVP". The precise complexity looks like
2^{n^(2/3+o(1))}, roughly 2^{O(n^(2/3)(log n)^(1/2))}. If this works
then it's obviously a huge reduction in the asymptotic uSVP cost.

The strategy is Regev's reduction to DCP, but with LLL replaced by BKZ,
and then Kuperberg's quantum algorithm for DCP. Most of the questions
from the audience were about whether errors can be controlled in the
quantum states produced by Regev's reduction; sounds like this is also
the reason for the "?" on the theorem. I asked about doing even better
by reusing this uSVP algorithm recursively inside BKZ; Alex said that
this doesn't work since the uSVP promise disappears for the projected
lattices where BKZ is recursively computing SVP.

---Dan

Oded Regev

Sep 10, 2015, 3:07:00 PM
to cryptanalyti...@googlegroups.com
Thanks for letting us know about it, Dan. I am of course curious to see more details. Let me just say that the approach sounds suspiciously similar to things that some people have tried before. In more detail, the reduction to DCP as presented in my 2002 paper is quite lossy, as it maps n-dimensional instances of uSVP to DCP instances over a group of order roughly 2^{n^2}. However, as it turns out, it is possible to make the reduction much more efficient and obtain DCP instances over groups of order roughly 2^{n log n}. This can be done, for instance, by starting from the BDD problem underlying LWE instead of an arbitrary instance of uSVP. I believe that this is already stronger than what one gets from the BKZ improvement suggested by Kirshanova and May.
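
To make the scaling concrete, here is a rough back-of-the-envelope sketch (the square-root cost heuristic and the constants are illustrative assumptions, not claimed parameters): if the sieve on a group of order |G| costs roughly 2^{O(sqrt(log |G|))}, then shrinking the group from 2^{n^2} to 2^{n log n} matters a great deal.

    # Back-of-the-envelope only: assume the sieve on a cyclic group of
    # order |G| costs about 2^{sqrt(log2 |G|)}, with hidden constants set to 1.
    # Compare |G| ~ 2^{n^2} (the lossy 2002 reduction) with
    # |G| ~ 2^{n log n} (the tighter reduction via BDD/LWE).
    import math

    def kuperberg_cost_exponent(log2_group_order):
        return math.sqrt(log2_group_order)   # cost ~ 2^exponent

    for n in [128, 256, 512]:
        lossy = kuperberg_cost_exponent(n ** 2)            # -> ~ 2^{O(n)}
        tight = kuperberg_cost_exponent(n * math.log2(n))  # -> ~ 2^{O(sqrt(n log n))}
        print(f"n={n}: ~2^{lossy:.0f} (lossy) vs ~2^{tight:.0f} (tighter)")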

The bottleneck is actually in the application of Kuperberg's algorithm. Kuperberg's algorithm (on groups of order 2^n) requires 2^{\sqrt{n}} *error-free* DCP samples. The algorithm is quite fragile -- one bad sample can break the whole algorithm. Unfortunately, the reduction in the 2002 paper produces DCP superpositions that are erroneous with probability 1/poly(n). One can try to reduce the error probability, but that comes at the price of a worse approximation factor for uSVP. The resulting tradeoff is no better than what one can already get classically. 
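
To see how sensitive this is, here is a quick illustrative calculation (eps = 1/n is an arbitrary placeholder for the 1/poly(n) error rate): the chance that all of roughly 2^{sqrt(n)} samples are simultaneously error-free collapses rapidly as n grows.

    # Illustration only: if each DCP sample is bad independently with
    # probability eps = 1/n (a placeholder polynomial), the chance that all
    # of ~2^{sqrt(n)} samples are clean is (1-eps)^(2^sqrt(n)),
    # i.e. roughly exp(-2^sqrt(n)/n), which vanishes quickly.
    import math

    def ln_prob_all_clean(n):
        eps = 1.0 / n
        samples = 2.0 ** math.sqrt(n)
        return samples * math.log1p(-eps)   # ln of (1-eps)^samples

    for n in [64, 256, 1024]:
        print(f"n={n}: ln Pr[all samples clean] ~ {ln_prob_all_clean(n):.1f}")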

-- Oded



D. J. Bernstein

Sep 11, 2015, 5:11:02 AM
to cryptanalyti...@googlegroups.com
Oded Regev writes:
> Let me just say that the approach sounds suspiciously similar to
> things that some people have tried before.

Really? What's the URL?

What I see in Kuperberg's 2013 paper is a claim that his algorithm
doesn't help for lattice problems since the reduction is too inefficient
("The fundamental reason is that we have trouble competing with
classical sieve algorithms for these lattice problems"), but this is
what Kirshanova and May are addressing.

> Kuperberg's algorithm (on groups of order 2^n) requires 2^{\sqrt{n}}
> *error-free* DCP samples.

Alex's talk prompted several of us to discuss this in detail, and at
this point we all seem to be in agreement that your summary of
Kuperberg's algorithm is wrong. The algorithm is actually much more
flexible, and remains subexponential time even if 1/poly of the inputs
are corrupted---exactly what the Kirshanova--May idea needs.

Specifically, the final qubit measured in the algorithm is a combination
of 2^L input qubits, where L is the number of levels in the algorithm;
the Kuperberg cost looks like 2^(n/L) for L in o(sqrt(n)). If the
original qubits are garbage with probability epsilon then a combination
of 2^L is correct with probability (1-epsilon)^(2^L). This is still useful
for 2^L as large as 1/epsilon, or even as large as n^{1-o(1)}/epsilon,
with the final errors corrected by a subexponential amount of voting.
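
A minimal numerical sketch of this argument (the error rate and failure target below are illustrative placeholders, not parameters from the work being discussed): a combined qubit survives with probability (1-epsilon)^(2^L), and a Chernoff-style bound gives the number of repeated runs needed to vote the residual errors away.

    # Sketch of the error-tolerance argument: a measured output combines 2^L
    # input qubits, so it is clean with probability p = (1-eps)^(2^L).  Assume
    # a corrupted combination gives a random answer; then one run is correct
    # with probability 1/2 + p/2, and ~ln(1/delta)/(2*(p/2)^2) majority votes
    # push the failure probability below delta.  eps and delta are placeholders.
    import math

    def clean_probability(eps, L):
        return (1.0 - eps) ** (2 ** L)

    def votes_needed(eps, L, delta=2.0 ** -40):
        bias = clean_probability(eps, L) / 2.0   # advantage over a coin flip
        return math.ceil(math.log(1.0 / delta) / (2.0 * bias ** 2))

    eps = 1e-3                                   # placeholder 1/poly(n) error rate
    for L in [6, 8, 10]:                         # 2^L up to roughly 1/eps
        p = clean_probability(eps, L)
        print(f"L={L}: Pr[clean run] ~ {p:.3f}, votes needed ~ {votes_needed(eps, L)}")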

If I'm understanding the asymptotics correctly, the exponent improvement
that Kirshanova and May obtain for poly-uSVP (compared to Laarhoven's
state-of-the-art pre-quantum sieving algorithms) is arbitrarily large,
depending on the poly: better than Grover, better than recent
fourth-root quantum algorithms for other problems, etc.

---Dan

Oded Regev

Sep 11, 2015, 10:56:35 PM
to cryptanalyti...@googlegroups.com
Hi Dan,

In your first post you mentioned the possibility of a subexponential 2^{n^(2/3+o(1))} time quantum algorithm. If I understand correctly, now you are only hoping for a 2^{cn} time quantum algorithm for a small constant (or slightly subconstant) c, right? This is a far more modest result, but would still be very nice.

I tried to verify the asymptotics but I don't yet see how you get such a running time. So it would be good to see more details. Specifically, I am curious how you're doing the reduction to DCP. Am I correct in assuming that you no longer use BKZ and instead use LWE as I suggested?


> Alex's talk prompted several of us to discuss this in detail, and at
> this point we all seem to be in agreement that your summary of
> Kuperberg's algorithm is wrong. The algorithm is actually much more
> flexible, and remains subexponential time even if 1/poly of the inputs
> are corrupted---exactly what the Kirshanova--May idea needs.


What I meant to say is of course that Kuperberg's algorithm requires all the samples that it uses to be correct. Obviously those samples that are never used in the quantum combining procedure can be corrupt.  Sorry for not making this clear. 

-- Oded


Ryan

Sep 12, 2015, 1:41:08 AM
to Cryptanalytic algorithms
I'm not the smartest guy in the room, so how would one achieve 128-bit security with lattice cryptosystems?

D. J. Bernstein

Sep 13, 2015, 2:46:02 PM
to cryptanalyti...@googlegroups.com
My understanding is that the poly-uSVP asymptotics, using BKZ plus a
proper understanding of Kuperberg, are not as good as the tentative
"Theorem(?) ... L[2/3]" but are nevertheless good enough to contradict
Oded's "no better than what one can already get classically". I don't
know whether the LWE asymptotics are even better. I'm also under the
impression that Kirshanova and May are writing something, and pestering
them might not be the best way to help them finish their writeup. :-)

Oded Regev writes:
> What I meant to say is of course that Kuperberg's algorithm requires
> all the samples that it uses to be correct.

Out of all the input qubits, 2^L are eventually combined into a qubit
that's measured, and any error in those 2^L qubits should be expected to
randomize the output---but an error rate of 1/2^L (or somewhat higher)
is still tolerable, since one can repeat the algorithm and count votes.

Of course, having a good chance of obtaining a correct result in this
way (for one bit, and then for all bits) does require many uncorrupted
votes, and each uncorrupted vote consumes 2^L uncorrupted input qubits.
But the big picture is that the algorithm is tolerating many corrupted
qubits at unknown positions, and I don't see how you can describe this
as the algorithm requiring "all the samples that it uses to be correct".

Furthermore, it's certainly not true that the algorithm "requires
2^{\sqrt{n}} *error-free* DCP samples"; L doesn't have to be chosen
anywhere near as large as sqrt(n).
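
To spell out that tradeoff numerically (illustrative only: this just plugs the 2^(n/L) per-run cost heuristic and the (1-epsilon)^(2^L) survival estimate into one formula, with epsilon = 1/n^2 as an arbitrary placeholder): choosing L close to log2(1/epsilon) keeps each run clean with constant probability, while the per-run cost exponent is about n/log2(1/epsilon), still o(n).

    # Illustrative tradeoff: per-run cost ~ 2^(n/L), clean-run probability
    # ~ (1-eps)^(2^L).  With L ~ log2(1/eps) the clean-run probability stays
    # near 1/e while the cost exponent is n/log2(1/eps) = o(n).
    # eps = 1/n^2 is an arbitrary placeholder polynomial error rate.
    import math

    def tradeoff(n):
        eps = 1.0 / n ** 2
        L = max(1, math.floor(math.log2(1.0 / eps)))
        clean = (1.0 - eps) ** (2 ** L)
        return L, clean, n / L            # per-run cost ~ 2^(n/L)

    for n in [256, 1024, 4096]:
        L, clean, exponent = tradeoff(n)
        print(f"n={n}: L={L}, Pr[clean run] ~ {clean:.2f}, per-run cost ~ 2^{exponent:.0f}")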

---Dan

Christopher J Peikert

Sep 14, 2015, 12:34:33 PM
to cryptanalytic-algorithms
If I'm following all this correctly:

* Last Thursday, the initial thought was that there is a quantum algorithm to solve poly(n)-uSVP in roughly 2^{n^{2/3}} time.  Oded's reduction to DCP is at the heart of the approach.

* On Friday, following Oded's message ("The bottleneck is actually in the application of Kuperberg's algorithm [not the DCP group size]"), the hoped-for runtime was downgraded to 2^{cn}, where the "constant" c decreases as the poly(n) approximation increases.

* Oded still doesn't see how to obtain such asymptotics, and there's not yet any public document with details or analysis.  So we are left to wait for one, or a statement that the attempt did not work out as hoped.

Given all this, let me suggest that it would be better not to announce potential attacks in this venue until there is a public document containing enough details that at least experts can check for plausibility.  If a "Theorem(?)" is indeed a Theorem, then there's no harm in waiting a few weeks for the details to be worked out properly.  But if not, then prematurely announcing the "Theorem(?)" to the entire Internet can only cause confusion and misunderstanding, which is surely not the purpose of this mailing list.

Chris


D. J. Bernstein

Sep 14, 2015, 8:48:58 PM
to cryptanalytic-algorithms
Christopher J Peikert writes:
> let me suggest that it would be better not to announce potential
> attacks in this venue until there is a public document containing
> enough details that at least experts can check for plausibility.

My experience is that science moves forward much more quickly from
scientists promptly sharing their observations---with appropriate
indications of confidence levels, of course---than from people who adopt
an antisocial "hide everything until the paper is online" mentality.

Of course, if you don't want to participate in the community process,
you're free to hide in your corner, but you shouldn't have the gall to
criticize people who _are_ participating. Furthermore, you're crossing
the line into unethical behavior when you fail to accurately report what
people have said about their confidence levels.

> * Last Thursday, the initial thought was that there is a quantum
> algorithm to solve poly(n)-uSVP in roughly 2^{n^{2/3}} time.

Actually, the initial report included qualifiers such as "work in
progress ... tentative conclusion ... Theorem(?) ... If this works",
along with a summary of the main reason for the question mark. Do you
really not understand how "thought" fails to capture this?

> * On Friday, following Oded's message ("The bottleneck is actually in
> the application of Kuperberg's algorithm [not the DCP group size]"),
> the hoped-for runtime was downgraded

Actually, the core issue was already in the initial report, and the
analysis and quantification of the consequences were already in progress
at the workshop. As I said before, my understanding after this work is
that the results aren't as good as the tentative "Theorem(?) ... L[2/3]"
but are nevertheless good enough to contradict Oded's "no better than
what one can already get classically".

What I've checked in detail is that the main technical point in Oded's
message, namely his summary of Kuperberg's algorithm, is wrong---it's
valid only for an artificially limited version of the algorithm. I
remain curious what he was referring to when he said that the new work
on uSVP is "suspiciously similar to things that some people have tried
before"; I asked for a URL and didn't see an answer.

> * Oded still doesn't see how to obtain such asymptotics, and there's
> not yet any public document with details or analysis. So we are left
> to wait for one, or a statement that the attempt did not work out as
> hoped.

My own assessment is that the available information is adequate for an
informed reader to figure out pretty much the entire picture, so waiting
certainly isn't the only option. However, since this all started with an
announcement by Kirshanova and May, they're the obvious people to write
a paper---it's silly to duplicate effort. Please also note that, even if
they never write a paper, it will still be obligatory to give them
appropriate credit.

---Dan

Christopher J Peikert

Sep 16, 2015, 8:51:50 AM
to cryptanalytic-algorithms
On Mon, Sep 14, 2015 at 8:49 PM, D. J. Bernstein <d...@cr.yp.to> wrote:
> Christopher J Peikert writes:
> > let me suggest that it would be better not to announce potential
> > attacks in this venue until there is a public document containing
> > enough details that at least experts can check for plausibility.
>
> My experience is that science moves forward much more quickly from
> scientists promptly sharing their observations---with appropriate
> indications of confidence levels, of course---than from people who adopt
> an antisocial "hide everything until the paper is online" mentality.

As I wrote, I am not asking for a completed paper, but merely that an Internet-wide announcement come with a basic outline of the algorithm and its analysis. In this case, it could be something like:

"Use BKZ with block size K, followed by Regev's reduction to generate DCP samples having error rate EPS on a group of order |G|; this works because REASON.  Apply Kuperberg's algorithm with L levels many times and take the majority answer.  Altogether this takes time T."

You have previously said what EPS and L are (and what T we might hope for), but have not specified the group order |G| or how it depends on EPS and/or K.  This is a critical piece of the picture: the runtime of Kuperberg's algorithm is at least 2^{log |G| / L}.
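
As one (purely illustrative) way to see how the missing piece interacts with the quantities already named, here is a sketch combining the 2^{log |G| / L} per-run cost with the repetition count implied by an error rate EPS; the values log2|G| = n*log2(n) and EPS = 1/n^2 below are placeholders, not anything claimed in this thread.

    # Illustrative only: combine the quantities named in the outline above.
    #   per-run cost   ~ 2^{log2|G| / L}
    #   clean-run prob ~ (1 - EPS)^(2^L)
    #   repetitions    ~ ln(1/delta) / (2*(clean/2)^2)  (majority voting)
    # log2|G| = n*log2(n) and EPS = 1/n^2 are placeholders, not claimed values.
    import math

    def total_cost_log2(n, L, delta=2.0 ** -40):
        log2_G = n * math.log2(n)
        eps = 1.0 / n ** 2
        per_run = log2_G / L
        clean = (1.0 - eps) ** (2 ** L)
        votes = math.log(1.0 / delta) / (2.0 * (clean / 2.0) ** 2)
        return per_run + math.log2(votes)

    n = 1024
    for L in [8, 16, 20, 24, 28]:
        print(f"L={L}: total cost ~ 2^{total_cost_log2(n, L):.0f}")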

> Actually, the core issue was already in the initial report, and the
> analysis and quantification of the consequences were already in progress
> at the workshop. As I said before, my understanding after this work is
> that the results aren't as good as the tentative "Theorem(?) ... L[2/3]"
> but are nevertheless good enough to contradict Oded's "no better than
> what one can already get classically".

I haven't yet seen an indication of your confidence level about this further work.  Do all the qualifiers on the original "Theorem(?)" apply here as well, or is this a higher-confidence statement?

Sincerely yours in cryptography, Chris
