Page 1 of eprint 2019/725 claims that "CSIDH-512 does not achieve the
claimed 64 bits of quantum security" under the "plausible assumption
that quantum evaluation does not cost very much more than indicated by a
recent 'best case' analysis".
This claim is not justified. The paper's argument for this conclusion is
actually based on implausible assumptions. More precisely, the argument
relies on
(1) implausible assumptions not admitted in the claim,
(2) implausible assumptions mischaracterized as plausible, and
(3) one plausible assumption.
For example, the paper implausibly assumes low-cost quantum access to
RAM. Reasons to disbelieve this particular assumption appear, e.g., in
https://arxiv.org/pdf/1502.03450.pdf, and in the literature justifying
NIST's dismissal of BHT, and in the literature justifying SIKE's switch
to smaller key sizes. (Will 2019/725 be extended, I wonder, to excitedly
announce that SIKE round 2 doesn't meet its claimed security level?)
This is an example of an implausible assumption not admitted in this
claim. The claim refers only to assumptions regarding the cost of
"quantum evaluation" ("quantumly evaluating the CSIDH group action"),
and unjustifiably omits the paper's assumptions regarding the rest of
the algorithm, such as the assumption of low-cost quantum access to RAM.
Other sentences of 2019/725 state the assumption of low-cost quantum
access to RAM, but this doesn't justify omitting the assumption here,
and doesn't make the assumption plausible. Page 4 of the paper points to
a way to replace this (implausible) assumption by other (implausible)
assumptions, but this still doesn't justify omitting the assumption.
I'm not saying that the assumptions regarding the cost of "quantum
evaluation" are plausible. On the contrary, this paper makes an
implausible series of jumps between cost models---ignoring all of the
relevant literature, including warnings in the introduction of [BLMP19].
One of these jumps comes from 2019/725 actively misstating [BLMP19]'s
conclusions as follows:
    [BLMP19] analyzed the concrete cost of quantumly evaluating the CSIDH
    group action. For the CSIDH-512 parameters, they arrived at an
    estimate of approximately 2^40 nonlinear qubit operations ...
The actual quote from [BLMP19] regarding a number close to 2^40 is
"nonlinear bit operations", which is not the same thing as 2019/725's
claimed "nonlinear qubit operations". As explained in textbooks and
reviewed in [BLMP19], one has to pay for reversibility, and for building
nonlinear bit operations from T-gates. 2019/725 doesn't give reasons to
think that these overheads can be eliminated in this context; I haven't
found anything in 2019/725 even acknowledging that the issue exists.
This particular gap between nonlinear bit operations and T-gates is at
most a factor 14. However:
(1) Despite 2019/725's overconfident claim of beating 64 bits, and
    its calculation of 56 bits, the author says that the paper "is not
affirmatively claiming a 2^56 quantum attack". It isn't clear
what exactly the gap below 64 bits is supposed to be, but clearly
the gap isn't large, so a few bits can destroy the conclusion.
(2) There's a huge additional cost for fault tolerance, and
presumably this will be an even bigger issue for isogeny
computations than it is for Grover attacks against AES, SHA-2,
etc. See the [BLMP19] quote from my previous message.
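For a sense of scale, here is a back-of-the-envelope sketch of what the factor-14 ceiling alone does to the paper's 56-bit calculation. This is not a claim about the true overhead, and it assumes for simplicity that the oracle cost dominates the attack:

```python
import math

# Upper bound discussed above: at most a factor 14 between nonlinear
# bit operations and T-gates.
tgate_factor = 14
extra_bits = math.log2(tgate_factor)
print(f"log2(14) = {extra_bits:.2f} bits")            # ~3.81 bits

# Even this single overhead, applied to the paper's 56-bit calculation,
# eats most of the gap below the claimed 64 bits; reversibility and
# fault tolerance (points 1 and 2 above) would add further costs.
print(f"56 + log2(14) = {56 + extra_bits:.2f} bits")  # ~59.81 bits
```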
The wording of the claim's "plausible assumption" draws the reader's
attention to one fragment of this assumption, namely the gap between a
"best case" group distribution and a uniform distribution. This is where
[BS18] estimates a 2^2 slowdown. It's plausible that the actual slowdown
is even smaller. But the claim states a broader assumption than this,
incorrectly says that the broader assumption is plausible, and, as
explained above, unjustifiably omits another implausible assumption.
Christopher J Peikert writes:
> Time will tell how optimistic the assumptions are
Right now a reasonable reader who takes the sentence
    Therefore, under the plausible assumption that quantum evaluation
    does not cost very much more than indicated by a recent "best case"
    analysis, CSIDH-512 does not achieve the claimed 64 bits of quantum
    security, and it falls well short of the claimed NIST security level
    1 when accounting for the MAXDEPTH restriction
from page 1 of 2019/725 at face value has no idea that, _beyond_ the
assumption stated
here, the paper is _also_ making an assumption of low-cost quantum
access to RAM.
Is it conceivable that the papers shredding this assumption are all
missing some amazing idea for how to build low-cost QRAM? Sure. But
2019/725 deceived this reader into thinking that this isn't in the list
of assumptions.
Appealing to the process of evaluating assumptions is rather strange as
a response to the problem of misinformation regarding which assumptions
are being used. This misinformation damages the process and needs to be
fixed.
Furthermore, the assumption that _is_ stated here is much broader than
the plausible part of it that's highlighted (the best-case-vs.-uniform
switch). The paper omits several caveats stated in [BLMP19], ignores
existing analyses such as
https://eprint.iacr.org/2016/992.pdf, and
makes an unsupported plausibility claim that actively deceives readers
regarding what's known on this topic.
> For implementing the quantum oracle, one reason for some optimism
> is given in the "Further Research" section: "because the c-sieve
> requires so few oracle queries (e.g., 2^16 for CSIDH-512), some
> immediate improvement may be obtainable simply by increasing the error
> probability of the oracle, from the 2^{-32} considered in [BLMP'19]."
This is a baffling response.
I'm saying that the published claim identified above misleads readers in
two ways: it hides implausible assumptions, and mischaracterizes other
implausible assumptions as plausible. This response is saying that the
claim of a security level below 64 bits for CSIDH-512 might be correct
for other reasons. How is this supposed to justify misleading readers
regarding the plausibility of the paper's assumptions?
Furthermore, anyone who actually reads [BLMP19] can see that, contrary
to what 2019/725 indicates, [BLMP19] systematically considers three
different error probabilities: 2^-1, 2^-32, and 2^-256. The fastest
algorithm in [BLMP19] is reported to use 106 iterations, 154 iterations,
and 307 iterations respectively; see page 29 of the paper. Given that
2^-1 uses almost 70% as many iterations as 2^-32, it's not reasonable to
hope for a big gap between 2^-16 and 2^-32.
Furthermore, [BLMP19] points to the software online for doing these
calculations. Simply running the "top2exact" script as per the
instructions shows that the probability reaches 2^-16 at 138 iterations,
which is 89.6% of the iterations used to reach 2^-32. In other words,
this "further research" was already done as part of [BLMP19], and
produces a difference in security levels of 0.11 bits. This is a rather
thin thread upon which to hang hopes of rescuing 2019/725's
below-64-bit claim.
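The arithmetic here is easy to check from the numbers quoted above (the 106/154/307 iteration counts from page 29 of [BLMP19], and the 138-iteration figure from running the top2exact script):

```python
import math

# Iterations reported in [BLMP19] for oracle error probabilities
# 2^-1, 2^-16, 2^-32, 2^-256 (the 2^-16 figure comes from running
# the top2exact script as described above).
iterations = {1: 106, 16: 138, 32: 154, 256: 307}

# Error probability 2^-1 already needs almost 70% as many iterations
# as 2^-32 ...
print(f"106/154 = {iterations[1] / iterations[32]:.1%}")   # 68.8%

# ... and 2^-16 needs 89.6% as many, bounding the saving in oracle
# cost by roughly log2(154/138) ~ 0.16 bits; the overall effect on
# the security level is the 0.11 bits mentioned above.
print(f"138/154 = {iterations[16] / iterations[32]:.1%}")  # 89.6%
print(f"log2(154/138) = {math.log2(iterations[32] / iterations[16]):.2f}")  # 0.16
```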
-----
The rest of this message switches to the failure of 2019/725 to give
adequate credit to Kuperberg.
https://quics.umd.edu/events/hidden-shift-algorithms-20 clearly shows
that, before this paper, Kuperberg was not merely _considering_ but
publicly _recommending_ his 2011 algorithm, in particular in the context
of attacking isogenies:
    These algorithms became more interesting when Childs, Jao, and
    Soukharev showed that they yield a quantum algorithm to find
    isogenies between elliptic curves. I will discuss my lesser known
    second algorithm, which deserves more attention because it supersedes
    my original algorithm as well as Regev's algorithm. The newer
    algorithm has a better constant in the exponent, it is expensive only
    in classical space and not quantum space, and it is tunable in
    various ways.
Section 4.5 of Kuperberg's 2011 paper explains the critical steps in a
concrete analysis, and his first paper already explains how to build
simulators for this type of algorithm. Detailed tuning takes some work,
but 2019/725 will obviously lead readers to miscredit it for more
fundamental work that is actually due to Kuperberg.
As a separate matter, I think it's fair to say that quite a few of us
learned tuning and analysis details from Kuperberg's talks beyond what
we learned from his 2011 paper. I'm skeptical of the idea that the
recent work on this topic was independent of this.
> > > While recent works have analyzed the concrete cost of the original
> > > algorithms (and variants) against CSIDH, there seems not to have been
> > > any consideration of the c-sieve.
> "Consideration" is clearly referring to "analyz[ing] the concrete cost,"
You could have written "there seems to have been no concrete analysis of
the c-sieve". Instead you switched to saying that there wasn't even any
"consideration" of the c-sieve. This broadens the clear literal meaning,
and the switch of wording suggests that this broadening was deliberate.
Perhaps this isn't what you meant. But, instead of trying to weaponize
arguable ambiguities as a way to evade responsibility for predictably
misleading readers into giving your work more credit than it deserves,
perhaps you could try making an effort to _eliminate_ ambiguities and
_prevent_ readers from being misled? I'm making this suggestion with all
due respect, and I hope you understand its constructive spirit.
I don't see where your paper is acknowledging that Kuperberg not only
_considered_ but also _recommended_ his 2011 algorithm to attack
isogenies. I also don't see where your paper is acknowledging the
overlap between its concrete analysis and the analysis that's already in
Kuperberg's papers.
---Dan