Damien Stehlé writes:
> 4. There is no known "everything-preserving" average-case to
> average-case reduction in either direction.
> To sum up, there is no known reduction between Kyber and NewHope,
> due to Item 4.
Actually, Item 4 understates the obstacles to a proof. Even if,
hypothetically, a reduction could simultaneously
* handle the average case,
* match the number of samples, and
* exactly preserve errors,
this _still_ wouldn't give a security proof saying that Kyber is at
least as strong as NewHope. It wouldn't work
* even if the KEMs are simplified down to the PKEs,
* even if the PKEs are simplified to ignore ciphertext compression,
* even if the PKEs are simplified to ignore error correction, and
* even if the number of samples in the cryptosystems is adjusted to
match what the reductions need,
contrary to what readers would expect from your message (and from
Peikert's message, and from NIST's message).
The big extra problem is NewHope's wide error distribution. This is not
something obscure: the wide error distribution plays a clear role in the
NewHope security analysis (see Section 4.1.1), and in the picture of
lattice security proofs more generally. The wide error distribution also
plays a clear role in the discrepancy between
* RLWE being more efficient than rank-2/3/4/... MLWE and
* NewHope being less efficient than Kyber.
The changes from NewHope to Kyber include switching to a much narrower
error distribution: a centered binomial built from just 4 random bits
per coefficient mod 3329 instead of 16 bits per coefficient mod 12289.
The "modulus switching" proof technique is far too noisy to compensate
for this. The Kyber documentation doesn't claim otherwise.
Surely you agree that this change destroys any hope of a Kyber>=NewHope
proof---it's not just that there's no "known" reduction.
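For concreteness, here is a minimal sketch (Python; the code and names
are mine, not from either specification) of the centered-binomial
samplers behind those bit counts. The variance of a centered binomial
with parameter eta is eta/2, so the 4-bit sampler (eta=2) has standard
deviation 1 while the 16-bit sampler (eta=8) has standard deviation 2:

    import random, statistics

    def cbd(eta):
        # Sum of eta random bits minus sum of eta random bits,
        # consuming 2*eta bits per coefficient.
        return sum(random.getrandbits(1) - random.getrandbits(1)
                   for _ in range(eta))

    kyber   = [cbd(2) for _ in range(100000)]  # 4 bits, used mod 3329
    newhope = [cbd(8) for _ in range(100000)]  # 16 bits, used mod 12289

    print(statistics.pstdev(kyber))    # ~1.0 (variance eta/2 = 1)
    print(statistics.pstdev(newhope))  # ~2.0 (variance eta/2 = 4)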
It's awfully tempting at this point to
* quote the portion of the Kyber documentation describing "very low
noise" as a much larger "threat" than something else, and to
* comment on the relative vulnerability levels of Kyber and NewHope
to _known_ hybrid attacks.
But diving into this tangent would be a distraction from the topic at
hand. Someone disputing a claim that X is provably at least as secure as
Y shouldn't be asked to demonstrate that X is less secure than Y. The
people claiming proofs---in this case, NIST---are responsible for
avoiding exaggerations of proofs in the first place. It isn't okay to
gloss over average-case issues, it isn't okay to gloss over sample
issues, it isn't okay to gloss over compression etc., and it isn't okay
to gloss over changes in the error distribution.
The bigger picture is that lattice-based cryptography is constantly
deceiving potential users regarding what has been proven. The worst part
of this is the bait and switch between
* constantly advertising worst-case-to-average-case theorems and
* switching, for deployment proposals, to efficient cryptosystems for
  which the theorems (at best) say security >= 2^tiny (see the toy
  calculation below).
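To spell out "2^tiny": here is a toy calculation (Python; the numbers
are purely hypothetical, chosen only to illustrate how a non-tight
reduction evaporates, and are not taken from any submission or paper):

    from math import log2

    # Hypothetical numbers only: a reduction that loses a
    # multiplicative factor "loss" guarantees cryptosystem security
    # of at least hardness/loss, and nothing more.
    hardness = 2.0 ** 150  # assumed worst-case attack cost
    loss     = 2.0 ** 140  # tightness loss of the reduction

    print("guaranteed security >= 2^%.0f" % log2(hardness / loss))
    # guaranteed security >= 2^10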
Users are only occasionally warned that there are two different types of
lattice cryptosystems, the theorem type and the efficient type. The
switch from "theorem" to "efficient" includes, most importantly,
(1) taking distributions too narrow for the theorems and
(2) taking dimensions too small for the theorems.
Effect #1 is illustrated by NewHope disappearing in favor of Kyber---
the supposed upgrade from RLWE to MLWE doesn't compensate for the
narrowing of the error distribution. Effect #1 is also illustrated by
your own proposal of wide NTRU distributions (used in a round-1 NISTPQC
submission precisely for their worst-case-to-average-case story)
disappearing in favor of narrow distributions.
Effect #2 is illustrated by _all_ of the NISTPQC lattice submissions.
This fact tends to be buried behind details omitted from asymptotic
theorem statements, but it is adequately documented in the literature
(starting with https://eprint.iacr.org/2016/360.pdf) and isn't subject
to any scientific dispute.
I'm not saying that giving up the theorems is a bad idea. (The theorems,
when applicable, provide far less security assurance than commonly
believed; consume resources that would be better applied to other
risk-reduction techniques; and seem incompatible with NIST's pursuit of
performance.) But we have to warn users that worst-case-to-average-case
theorems simply do not apply to lattice systems proposed for deployment.
With this background, it's particularly striking to see NIST stating
that NTRU "lacks a formal worst-case-to-average-case reduction". This
doesn't make any sense as a comment unless NIST believes, incorrectly,
that some of the lattice submissions _have_ a worst-case-to-average-case
reduction. An incorrect belief in a worst-case-to-average-case reduction
for (say) Kyber would necessarily include
(1) missing the error-distribution obstacle to Kyber proofs and
(2) missing the dimension obstacle to Kyber proofs.
Missing the error-distribution obstacle, and downplaying "minor" issues
such as the number of samples, would also make a Kyber>=NewHope proof
sound plausible. Sure enough, this imaginary security proof, concluding
that "the security of NewHope is never better than that of KYBER", is
the centerpiece of NIST's NewHope commentary.
NIST promised more transparency after Dual EC. I don't understand why
NIST keeps soliciting _private_ NISTPQC input rather than asking for the
whole evaluation to be done in public. (I also said this on the record
before round 3 was announced.) This isn't just an NSA issue; anyone who
has served on program committees sees that some scientists use privacy
as a shield for exaggeration. I don't see NISTPQC procedures to
compensate for overconfidence and confirmation bias; I don't see where
NIST asked for public feedback regarding these NTRU-vs.-something and
NewHope-vs.-Kyber provable-security claims before the assessments
appeared in the latest report; and I don't see how similar NIST errors
in round 3 are going to be corrected before they're used for decisions.
---Dan