ROUND 2 OFFICIAL COMMENT: NewHope


D. J. Bernstein

Jul 23, 2020, 5:32:45 AM
to pqc-co...@nist.gov, pqc-...@list.nist.gov
https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8309.pdf has a new
provable-security argument concluding that "the security of NewHope is
never better than that of KYBER". (This is almost half of NIST's text in
that document regarding NewHope.)

I don't believe the argument. I'm filing this comment to request that
NIST spell out the argument in more detail for public review. (I'm also
skeptical regarding the conclusion of the argument, but that's a more
subtle issue.)

Section 7.3 of https://cr.yp.to/papers.html#latticeproofs observes that
"proofs do not reach all the way from one of the target KEMs to another,
or even from one of the target PKEs to another". As far as I know, this
remains the situation today, and is not changed by the subsequent paper
that NIST cites as part of its argument (nor do I see where that paper
claims to change the situation).

I understand that NewHope is not a round-3 candidate, and that NIST
could switch to other justifications for eliminating NewHope, but the
problems with this argument seem relevant to round 3 both procedurally
(where exactly did the public have an opportunity to review and correct
NIST's provable-security claim?) and content-wise (the security of MLWE
and Kyber will continue to be compared to other round-3 candidates, and
in any case should not be exaggerated).

---Dan

Damien Stehlé

Jul 24, 2020, 11:58:23 AM
to pqc-forum, pqc-comments, Leo Ducas, Vadim Lyubashevsky
Dear Dan, everyone,

As far as we understand, the situation is as follows:

1. There is a worst-case to worst-case "everything-preserving" (i.e.,
noise, samples over Z_q, total dimension over Z) reduction from
Ring-LWE instances to Module-LWE ones,
2. There is an average-case to average-case reduction from Ring-LWE
to Module-LWE that is noise/total dimension preserving, but not
sample-preserving,
3. There is no known noise-preserving reduction of any kind going
the other way around (i.e., from Module-LWE to Ring-LWE),
4. There is no known "everything-preserving" average-case to
average-case reduction in either direction.

To sum up, there is no known reduction between Kyber and NewHope,
due to Item 4. However, Items 1-3 make it seem unlikely (at least to
us) that Module-LWE could be easier in practice for "natural"
distributions (e.g., uniform on bounded intervals) that are not covered
by the reductions. This was one of the reasons why Module-LWE was
chosen for Kyber.

Best regards,
Damien, Léo, Vadim

Christopher J Peikert

Jul 24, 2020, 12:07:55 PM
to Damien Stehlé, Leo Ducas, Vadim Lyubashevsky, pqc-comments, pqc-forum
On Fri, Jul 24, 2020 at 11:57 AM Damien Stehlé <damien...@gmail.com> wrote:
> Dear Dan, everyone,
>
> As far as we understand, the situation is as follows: ...
>
> 2. There is an average-case to average-case reduction from Ring-LWE
> to Module-LWE that is noise/total dimension preserving, but not
> sample-preserving,

Well, the reduction is even “sample preserving,” in that it maps one Ring-LWE sample to one Module-LWE sample.

However, Kyber reveals more Module-LWE samples than NewHope reveals Ring-LWE samples. This is why there is no known reduction between (the pre-FO versions of) Kyber and NewHope.

Sincerely yours in cryptography,
Chris

daniel.apon

Jul 24, 2020, 2:08:11 PM
to pqc-forum, cpei...@alum.mit.edu, Leo Ducas, vadim....@gmail.com, pqc-comments, pqc-forum, damien...@gmail.com
Hi Dan, and all,

It takes us a little extra time to formulate a full response as an entire group, so this email may partly repeat what others have said recently in this thread. In any case--

As you point out, most of the text in https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8309.pdf regarding NewHope references KYBER. The discussion there primarily aimed to explain the reasoning leading to the statement "NIST developed a slight but clear preference for KYBER and for low-rank MLWE schemes over RLWE schemes," particularly given (i) the selection of KYBER as a finalist in the 3rd Round, and (ii) the overwhelming similarities between NewHope and KYBER in general.

The underlying technical argument can be (and has been) stated many ways, but we found the peer-reviewed, publicly published paper "Algebraically Structured LWE, Revisited" (TCC 2019) to give the technically tightest and most general argument about the relationship between RLWE and MLWE.

This paper's proceedings version has been accessible to the public on the publisher's website at https://link.springer.com/chapter/10.1007%2F978-3-030-36030-6_1 since November 22, 2019, and it has been accessible to the public in pre-print form at https://eprint.iacr.org/2019/878 since August 1, 2019. All versions of the pre-print (August 1, 2019; September 21, 2019; and November 15, 2019) are accessible to the public at https://eprint.iacr.org/eprint-bin/versions.pl?entry=2019/878. For the sake of discussion, we may use the November 15, 2019 version of the paper, currently found at https://eprint.iacr.org/2019/878.pdf.

The technical matter relevant to the NewHope vs KYBER decision point is found in Section 6 "Reduction from O'-LWE to O-LWE^k" beginning on Page 15. Our interpretation is that for the case of power-of-2 cyclotomics, there is a reduction from Ring-LWE to Module-LWE that's nearly perfect. We list some features of this reduction:

-- The reduction runs in linear time.
-- The reduction scales up security by ring-dimension n times module-rank k.
-- The reduction is modulus-preserving.
-- The reduction is "almost" sample-preserving. (*)
-- The reduction is error distribution and error magnitude preserving in the "typical setting," where the coordinates of the errors are chosen i.i.d.

(*) The Module-LWE secret in KYBER is exposed to more samples (say, 3 or 4) than the Ring-LWE secret in NewHope (just 1). This appears to be inherent in the natural constructions, as the typical plain LWE cryptosystem 'gives away' a linear number of samples.

Due to this above caveat, there is not a formal proof that an attack on KYBER yields an attack on NewHope (ceteris paribus regarding exact concrete parameterization choices). However, there is also no proof or even clear reason to believe that this difference helps an attacker. (It would, for example, be surprising to conclude that "typical" plain LWE cryptosystems are fundamentally easier to attack than low-rank Module-LWE cryptosystems, primarily because the plain LWE cryptosystem releases more samples.)
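The sample-count caveat can be sketched concretely. The toy code below (illustrative sizes and simplified sampling only; it omits compression, the FO transform, and the real schemes' distributions) shows why the module secret is exposed to k samples while the ring secret is exposed to just one:

```python
import random

random.seed(0)

def polymul(a, b, q):
    """Multiply two polynomials mod (x^n + 1, q) (negacyclic convolution)."""
    n = len(a)
    c = [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            c[i + j] += a[i] * b[j]
    return [(c[i] - c[i + n]) % q for i in range(n)]  # x^n = -1

def add(u, v, q):
    return [(x + y) % q for x, y in zip(u, v)]

# Toy sizes only: the real schemes use n = 1024 (NewHope) / n = 256 (Kyber).
n, q, k = 8, 3329, 3
small = lambda: [random.randint(-2, 2) % q for _ in range(n)]
unif  = lambda: [random.randrange(q) for _ in range(n)]

# NewHope-style public key (a, b = a*s + e): ONE Ring-LWE sample in the secret s.
s, e, a = small(), small(), unif()
b = add(polymul(a, s, q), e, q)

# Kyber-style public key (A, B = A*S + E): k Module-LWE samples in the secret S,
# one per row of the k-by-k matrix A over the ring.
S = [small() for _ in range(k)]
E = [small() for _ in range(k)]
A = [[unif() for _ in range(k)] for _ in range(k)]
B = []
for i in range(k):
    row = E[i]
    for j in range(k):
        row = add(row, polymul(A[i][j], S[j], q), q)
    B.append(row)

# The module secret S appears in k samples, the ring secret s in just one.
assert len(B) == k and k > 1
```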

We do note, separately, that we have meaningful, fairly tight reductions from power-of-2 cyclotomic Ring-LWE (under mildly differing parameters) to both NewHope and KYBER. So the conclusion drawn is that -- from the high-level perspective -- NewHope and KYBER are roughly equally good in terms of security. (We also note KYBER has a reduction from Module-LWE, but there is no known reduction from Module-LWE to NewHope.)

Therefore, in terms of Ring-LWE vs Module-LWE (and NewHope vs KYBER), we assess (with a slight preference) that the more conservative, more security-conscious choice is the less algebraically-structured option of Module-LWE. Since security is Job #1, our decision aligned with what we assess to be the higher-security option.

Finally, we note that the above discussion (modulo the sample-count caveat mentioned) primarily relates to our assessment of the relative security of IND-CPA NewHope and IND-CPA KYBER. There is the separate, outstanding question of whether the CPA-to-CCA transformation affects this picture. Speaking "informally," there doesn't yet seem to be a significant reason to believe so, although it is certainly possible.

--The NIST PQC team  

Damien Stehlé

Jul 25, 2020, 4:36:43 AM
to Christopher J Peikert, Leo Ducas, Vadim Lyubashevsky, pqc-comments, pqc-forum


On Fri, Jul 24, 2020 at 6:07 PM, Christopher J Peikert <cpei...@alum.mit.edu> wrote:
> On Fri, Jul 24, 2020 at 11:57 AM Damien Stehlé <damien...@gmail.com> wrote:
>> Dear Dan, everyone,
>>
>> As far as we understand, the situation is as follows: ...
>>
>> 2. There is an average-case to average-case reduction from Ring-LWE
>> to Module-LWE that is noise/total dimension preserving, but not
>> sample-preserving,
>
> Well, the reduction is even “sample preserving,” in that it maps one Ring-LWE sample to one Module-LWE sample.
>
> However, Kyber reveals more Module-LWE samples than NewHope reveals Ring-LWE samples. This is why there is no known reduction between (the pre-FO versions of) Kyber and NewHope.
>
> Sincerely yours in cryptography,
> Chris

Dear Chris, all,

Well, "sample" is a bit overloaded here.

If one counts Ring-LWE samples, rather than the Module-LWE samples that each Ring-LWE sample contains, then the worst-case to worst-case reduction (Item 1) increases the number of samples: one Ring-LWE sample contains a number of (correlated) Module-LWE samples equal to the ratio of the field degrees.
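For power-of-2 cyclotomics this containment can be checked by hand: splitting a polynomial into its even- and odd-index coefficients expresses a degree-n ring element over the degree-n/2 subring. The sketch below (toy parameters, rank k = 2, chosen only for illustration) verifies that one Ring-LWE sample is exactly two correlated Module-LWE samples:

```python
import random

random.seed(1)

def polymul(a, b, q):
    """Multiply two polynomials mod (x^n + 1, q), n = len(a) (negacyclic)."""
    n = len(a)
    c = [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            c[i + j] += a[i] * b[j]
    return [(c[i] - c[i + n]) % q for i in range(n)]  # x^n = -1

def add3(u, v, w, q):
    return [(x + y + z) % q for x, y, z in zip(u, v, w)]

n, k, q = 8, 2, 97          # toy sizes; the subring has degree m = n // k
m = n // k

# One Ring-LWE sample (a, b = a*s + e) over Z_q[x]/(x^n + 1).
a = [random.randrange(q) for _ in range(n)]
s = [random.randint(-2, 2) % q for _ in range(n)]
e = [random.randint(-2, 2) % q for _ in range(n)]
b = [(x + y) % q for x, y in zip(polymul(a, s, q), e)]

# Write c(x) = c0(x^2) + x*c1(x^2); c0, c1 live in the subring Z_q[y]/(y^m + 1)
# with y = x^2, since (x^2)^m = x^n = -1.  (Hard-coded for k = 2.)
split = lambda c: (c[0::2], c[1::2])
a0, a1 = split(a); s0, s1 = split(s); b0, b1 = split(b); e0, e1 = split(e)

def mul_y(c):               # multiply by y in Z_q[y]/(y^m + 1)
    return [-c[-1] % q] + c[:-1]

# (a0 + x*a1)(s0 + x*s1) = (a0*s0 + y*a1*s1) + x*(a1*s0 + a0*s1), so the single
# ring sample is exactly k = 2 correlated module samples with secret (s0, s1):
assert b0 == add3(polymul(a0, s0, q), polymul(mul_y(a1), s1, q), e0, q)
assert b1 == add3(polymul(a1, s0, q), polymul(a0, s1, q), e1, q)
```

Both module samples share the same secret vector (s0, s1), which is the sense in which they are correlated.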

Best regards
Damien

D. J. Bernstein

Jul 25, 2020, 4:37:12 AM
to pqc-co...@nist.gov, pqc-...@list.nist.gov
Damien Stehlé writes:
> 4. There is no known "everything-preserving" average-case to
> average-case reduction in either direction.
> To sum up, there is no known reduction between Kyber and NewHope,
> due to Item 4.

Actually, Item 4 understates the obstacles to a proof. Even if,
hypothetically, a reduction can simultaneously

* handle the average case,
* match the number of samples, and
* exactly preserve errors,

this _still_ wouldn't give a security proof saying that Kyber is at
least as strong as NewHope. It wouldn't work

* even if the KEMs are simplified down to the PKEs,
* even if the PKEs are simplified to ignore ciphertext compression,
* even if the PKEs are simplified to ignore error correction, and
* even if the number of samples in the cryptosystems is adjusted to
match what the reductions need,

contrary to what readers would expect from your message (and from
Peikert's message, and from NIST's message).

The big extra problem is NewHope's wide error distribution. This is not
something obscure: the wide error distribution plays a clear role in the
NewHope security analysis (see Section 4.1.1), and in the picture of
lattice security proofs more generally. The wide error distribution also
plays a clear role in the discrepancy between

* RLWE being more efficient than rank-2/3/4/... MLWE but
* NewHope being less efficient than Kyber.

The changes from NewHope to Kyber include switching to a much narrower
error distribution, adding up just 4 bits mod 3329 instead of 16 bits
mod 12289. The "modulus switching" proof technique is far too noisy to
compensate for this. The Kyber documentation doesn't claim otherwise.
Surely you agree that this change destroys any hope of a Kyber>=NewHope
proof---it's not just that there's no "known" reduction.
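The width gap is easy to quantify. The sketch below (an illustration built from the figures above: 16 random bits per error coefficient for NewHope's distribution, 4 for Kyber's) computes both centered-binomial distributions exactly by convolution:

```python
from collections import Counter

def centered_binomial_pmf(half_bits):
    """Exact pmf of sum_{i=1}^{half_bits} (b_i - b'_i) with fair bits b, b'."""
    pmf = {0: 1.0}
    step = {-1: 0.25, 0: 0.5, 1: 0.25}  # distribution of one (b_i - b'_i) term
    for _ in range(half_bits):
        new = Counter()
        for v, p in pmf.items():
            for d, pd in step.items():
                new[v + d] += p * pd
        pmf = dict(new)
    return pmf

def stddev(pmf):
    mean = sum(v * p for v, p in pmf.items())
    return sum((v - mean) ** 2 * p for v, p in pmf.items()) ** 0.5

newhope = centered_binomial_pmf(8)   # 16 random bits per coefficient, mod 12289
kyber   = centered_binomial_pmf(2)   # 4 random bits per coefficient, mod 3329

assert max(newhope) == 8 and max(kyber) == 2   # supports [-8, 8] vs [-2, 2]
assert abs(stddev(newhope) - 2.0) < 1e-9       # sigma = sqrt(16/4) = 2
assert abs(stddev(kyber) - 1.0) < 1e-9         # sigma = sqrt(4/4) = 1
```

The error standard deviation drops by a factor of 2 while the modulus drops by a factor of roughly 3.7, so the noise-to-modulus ratio shrinks substantially.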

It's awfully tempting at this point to

* quote the portion of the Kyber documentation describing "very low
noise" as a much larger "threat" than something else, and to

* comment on the relative vulnerability levels of Kyber and NewHope
to _known_ hybrid attacks.

But diving into this tangent would be a distraction from the topic at
hand. Someone disputing a claim that X is provably at least as secure as
Y shouldn't be asked to demonstrate that X is less secure than Y. The
people claiming proofs---in this case, NIST---are responsible for
avoiding exaggerations of proofs in the first place. It isn't okay to
gloss over average-case issues, it isn't okay to gloss over sample
issues, it isn't okay to gloss over compression etc., and it isn't okay
to gloss over changes in the error distribution.

The bigger picture is that lattice-based cryptography is constantly
deceiving potential users regarding what has been proven. The worst part
of this is the bait and switch between

* constant advertising of worst-case-to-average-case theorems and

* switching, for deployment proposals, to efficient cryptosystems
for which the theorems (at best) say security >= 2^tiny.

Users are only occasionally warned that there are two different types of
lattice cryptosystems, the theorem type and the efficient type. The
switch from "theorem" to "efficient" includes, most importantly,

(1) taking distributions too narrow for the theorems and
(2) taking dimensions too small for the theorems.

Effect #1 is illustrated by NewHope disappearing in favor of Kyber---
the supposed upgrade from RLWE to MLWE doesn't compensate for the loss
of error distribution. Effect #1 is also illustrated by your own
proposal of wide NTRU distributions (used in a round-1 NISTPQC
submission, exactly for its worst-case-to-average-case story)
disappearing in favor of narrow distributions.

Effect #2 is illustrated by _all_ of the NISTPQC lattice submissions.
This fact tends to be buried behind details omitted from asymptotic
theorem statements, but this fact is adequately documented in the
literature (starting with https://eprint.iacr.org/2016/360.pdf) and
isn't subject to any scientific dispute.

I'm not saying that giving up the theorems is a bad idea. (The theorems,
when applicable, provide far less security assurance than commonly
believed; consume resources that would be better applied to other
risk-reduction techniques; and seem incompatible with NIST's pursuit of
performance.) But we have to warn users that worst-case-to-average-case
theorems simply do not apply to lattice systems proposed for deployment.

With this background, it's particularly striking to see NIST stating
that NTRU "lacks a formal worst-case-to-average-case reduction". This
doesn't make any sense as a comment unless NIST believes, incorrectly,
that some of the lattice submissions _have_ a worst-case-to-average-case
reduction. An incorrect belief in a worst-case-to-average-case reduction
for (say) Kyber would necessarily include

(1) missing the error-distribution obstacle to Kyber proofs and
(2) missing the dimension obstacle to Kyber proofs.

Missing the error-distribution obstacle, and downplaying "minor" issues
such as the number of samples, would also make a Kyber>=NewHope proof
sound plausible. Sure enough, this imaginary security proof, concluding
that "the security of NewHope is never better than that of KYBER", is
the centerpiece of NIST's NewHope commentary.

NIST promised more transparency after Dual EC. I don't understand why
NIST keeps soliciting _private_ NISTPQC input rather than asking for the
whole evaluation to be done in public. (I also said this on the record
before round 3 was announced.) This isn't just an NSA issue; anyone who
has served on program committees sees that some scientists use privacy
as a shield for exaggeration. I don't see NISTPQC procedures to
compensate for overconfidence and confirmation bias; I don't see where
NIST asked for public feedback regarding these NTRU-vs.-something and
NewHope-vs.-Kyber provable-security claims before the assessments
appeared in the latest report; and I don't see how similar NIST errors
in round 3 are going to be corrected before they're used for decisions.

---Dan