Non-applicability of the CNRS patent to Kyber and Saber, and next steps


Damien Stehlé

Oct 8, 2021, 3:34:09 PM
to pqc-forum, Vadim Lyubashevsky
Dear all,
(This is a joint email on behalf of Vadim Lyubashevsky and Damien Stehlé)

We think that NIST has a hard decision to make with regard to
standardizing a lattice-based KEM, in particular due to intellectual
property claims by CNRS on Kyber and Saber. In the attached note, we
refute the claim and argue that it should be ignored. We also give
an alternative route, in case NIST still feels that this piece of IP
presents a credible litigation threat.

Summary:
~~~~~~~~
1. NIST has a seemingly hard decision to make in selecting a
lattice-based KEM because of the CNRS insistence that the
Gaborit&Aguilar-Melchor patent that it owns covers Kyber and Saber.
2. This claim was, however, contradicted by CNRS itself in the Keltie
vs CNRS opposition proceedings. Pursuing this claim now is
intellectual dishonesty on the part of CNRS and the scientists who may
be advising it.
3. Paying anything substantial to CNRS for the lifting of its claims
would constitute a moral hazard to future scientific work.
4. If NIST is still not sufficiently convinced of the inapplicability
of the CNRS claims, an option could be to standardize an NTRU-based
scheme.
5. If NIST decides to go this route, it should *not* take the NTRU
submissions in the finals -- there was very little variety in the few
submitted NTRU schemes and they can be significantly improved while
still following the NTRU framework.
6. If NIST wants NTRU, it should give the community about 6 months to
use the recent advancements, many of which targeted Ring/Module-LWE
constructions but are compatible with NTRU. This should lead to a
better instantiation without significantly delaying the
final standard.
7. Running a 4-year standard selection process, having most of the
efficient lattice schemes disqualified because of illegitimate IP
claims, and standardizing the current NTRU version just because it’s
the only option, would be a very disappointing outcome to the whole
effort, and to the huge research efforts that have been made in this
area in the last 25 years.


Unfortunately, CNRS appears to continue to insist that the
Gaborit&Aguilar-Melchor patent applies to Kyber and Saber. We insist
that this claim lacks all merit, is in contradiction with the ethical
behavior expected from a first-tier research institution, and
whichever academic may be advising them to pursue this matter is
committing intellectual fraud. As far as we know, no neutral scientist
within the CNRS laboratories has been asked to give an opinion on this
matter. Yet, over the last 15 months, I [Damien] have contacted CNRS
multiple times, at multiple levels of its hierarchy, to suggest that
it do so. The refusal of CNRS to discuss the scientific matters with
its own experts, when there are strong objections to the claims'
scientific merit, is unbecoming of a scientific entity.


We include a full scientific pdf on the inapplicability of the
Gaborit&Aguilar-Melchor patent to Kyber and Saber, in an attachment to
this email. It is also available at the following URL:
https://github.com/VadimLyubash/non-app-KyberSaber/blob/main/non-app.pdf


We summarize the main points here. In the legal proceedings
attempting to invalidate the patent, Keltie LLP insisted that the main
claim of the patent (Claim 1, i.e., the encryption scheme) is
mathematically incorrect as stated. CNRS insisted that it is correct
because it obviously only applies to commutative rings, and
furthermore that they would not impede any patents that would later
cover non-commutative rings. Their insistence on the obviousness of
the rings being commutative explicitly figured in the written decision
to uphold the patent. In order to claim applicability to Kyber and
Saber, they are now doing a complete U-turn and saying that their
patent also applies to non-commutative versions of the scheme. This
is the intellectual dishonesty that we are referencing. Furthermore,
and perhaps even more importantly, their attempt at broadening their
claim cannot succeed because the LPS paper
(https://eprint.iacr.org/2009/576), which predates this patent,
already presents a version of their encryption scheme over
non-commutative integer rings. Kyber and Saber differ from LPS only in
the choice of ring and clearly do not fit with the commutative ring
setup of the patent.
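
To make the distinction concrete, here is a toy numerical sketch
(ours, with illustrative parameters far smaller than Kyber's):
individual polynomials in Z_q[x]/(x^n+1) commute, but matrices of such
polynomials -- the objects Kyber and Saber actually compute with -- do
not.

    import numpy as np

    q, n = 17, 4  # toy modulus and degree, for illustration only

    def ring_mul(a, b):
        # multiply in the commutative ring Z_q[x]/(x^n + 1)
        c = np.zeros(n, dtype=np.int64)
        for i in range(n):
            for j in range(n):
                if i + j < n:
                    c[i + j] += a[i] * b[j]
                else:
                    c[i + j - n] -= a[i] * b[j]  # reduce via x^n = -1
        return c % q

    def mat_mul(A, B):
        # multiply 2x2 matrices whose entries are ring elements
        return [[sum(ring_mul(A[i][t], B[t][j]) for t in range(2)) % q
                 for j in range(2)] for i in range(2)]

    rng = np.random.default_rng(0)
    a, b = rng.integers(0, q, n), rng.integers(0, q, n)
    print(np.array_equal(ring_mul(a, b), ring_mul(b, a)))  # True: the ring commutes

    A = [[rng.integers(0, q, n) for _ in range(2)] for _ in range(2)]
    B = [[rng.integers(0, q, n) for _ in range(2)] for _ in range(2)]
    print(np.array_equal(np.array(mat_mul(A, B)),   # False for random entries:
                         np.array(mat_mul(B, A))))  # the matrix ring does not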


Given this, we are completely confident that their claim has no
scientific merit (of course we cannot say anything about legal merit
since we are not lawyers). Thus, if any entity chooses to pay anything
to CNRS, whether to license or to buy out the patent, it will create
a very bad scientific precedent. Most of the contributions in
quantum-safe cryptography research, whether lattice-based, hash-based,
code-based, multivariate-based, or isogeny-based have been provided
IP-free through joint work by researchers from universities and
industry. It would be perverse, then, to pay a license fee for the one
thing that actually had no scientific contribution to the massive
effort of obtaining practical quantum-safe crypto. This would set a
terrible precedent for cryptographic research and could even make
academics and institutions reconsider their priorities, by encouraging
scientifically worthless output.


Nevertheless, NIST may decide that it wants to avoid any risk of
controversy or future litigation, and simply go with the third
finalist, NTRU. We think that an NTRU-based scheme is a valid
alternative, but we would strongly recommend that NIST not choose any
of those that are in the finals. Instead, if it chooses to go the
NTRU route, it should give the community around 6 months to come up
with a (much) better instantiation.


Out of the 17 submissions of KEMs based on structured lattices, 14
were Ring/Module-LWE based, and only 3 were NTRU-based. Two of the
NTRU ones have merged, and the merged scheme progressed to the finals
with the team seeming to favor the NTRU-HRSS version. The third
submitted scheme, NTRU Prime, is an alternate. There was very little
variety in the NTRU submissions and it appears that they advanced
merely because they were NTRU-based, despite having some questionable
design decisions. We also believe that the NTRU submissions received
far less scrutiny than those based on Ring and Module-LWE.


The two decisions made by NTRU-HRSS that hamper its performance and
security are:
1. Having a large modulus in order to not have any decryption errors.
2. Using a cyclotomic ring, but not one that supports the
number-theoretic transform (NTT).


We now elaborate on these two points:

The probability of a decryption error (i.e., the probability that a
properly created ciphertext is incorrectly decrypted) is a
*statistical* term. Setting it to something less than 2^-200 is
already being quite paranoid, while insisting that it is 0 is a
gimmick that hurts performance and security. In order to have 0
decryption error, one needs to set the working modulus significantly
higher (around 3X for the usual parameters) than if one were to aim
for a 2^-200 decryption error. This leads to a loss of around 20-30
bits of security against lattice attacks -- a margin that might
actually be useful to have in case better algorithms are developed. Furthermore,
a large modulus is particularly concerning for NTRU because recent
results on the NTRU problem stress that it is very different from
(Plain, Ring, Module)-LWE in that the cost of solving it decreases
much faster when the working modulus q increases. The bottom line is
that NTRU is a unique-SVP instance in rank-2 module lattices:
increasing q (while keeping the secret key of the same size) increases
the unique-SVP gap, which weakens the problem. While we do not
necessarily believe that one should throw away NTRU schemes due to
this line of attacks, one should certainly be wary of their existence.
And setting the modulus so as to get closer to the region where these
attacks work for no apparent gain is a very questionable decision.
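
As an aside, since the decryption error is a statistical quantity,
computing it is mechanical. Here is a minimal sketch (ours; toy
parameters, not any submission's actual distributions): build the
exact distribution of one noise coefficient by repeated convolution,
read off the tail mass beyond the decoding threshold, and union-bound
over the n coefficients.

    from fractions import Fraction

    def convolve(p1, p2):
        # exact convolution of two integer distributions {value: probability}
        out = {}
        for v1, pr1 in p1.items():
            for v2, pr2 in p2.items():
                out[v1 + v2] = out.get(v1 + v2, Fraction(0)) + pr1 * pr2
        return out

    # distribution of the product of two independent uniform ternary secrets
    prod = {0: Fraction(5, 9), 1: Fraction(2, 9), -1: Fraction(2, 9)}

    n, threshold = 64, 20  # toy: 64 such products per noise coefficient
    coeff = {0: Fraction(1)}
    for _ in range(n):
        coeff = convolve(coeff, prod)

    tail = sum(pr for v, pr in coeff.items() if abs(v) > threshold)
    print(float(n * tail))  # union bound on the decryption failure probability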

The second choice that NTRU-HRSS made that we believe should be
re-examined is to use cyclotomic rings that do not support the NTT
algorithm. One can argue as to whether or not cyclotomics are weaker
than other rings (there is currently no evidence as to cyclotomics
being weaker or stronger), but if one uses cyclotomics, one might as
well use ones that lead to fast algorithms. This is particularly
important for NTRU, because key generation involves a division. By
choosing a proper cyclotomic ring and allowing negligible decryption
errors, one can have a scheme that is 15X faster (key exchange
round-trip) and 15% smaller (see https://eprint.iacr.org/2021/1352).
While it is certainly to be expected that the state of the art
progresses during the standardization process and the chosen standard
is thus behind the state of the art, being 15X behind in speed is a
bit much, and will probably result in requiring to support some more
efficient version down the line (e.g., look at all the curves that
need to be supported). In fact, the NTRU-HRSS scheme was behind the
state of the art when it was submitted -- it seems more of a showcase
for efficient multiplication / division in rings that do not support
NTT than an attempt at an optimal NTRU-based scheme. Just as
a simple example, it would have been quite easy to convert NewHope
into an NTRU-based scheme, and it would have been at least an order of
magnitude faster than NTRU-HRSS. But the NewHope submitters chose not
to do this because there is no security/performance/implementation
reason to prefer an NTRU NewHope to NewHope. This is the same reason
that most other submitters chose not to create NTRU versions of their
Ring/Module-LWE schemes -- there was no scientific reason to do so.
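
For concreteness, here is a minimal sketch (ours; toy sizes, and
quadratic-time transforms where real code uses O(n log n) butterflies)
of why NTT-friendly rings help. When q = 1 (mod 2n), x^n + 1 splits
completely modulo q, so both the multiplications used everywhere and
the division used in NTRU key generation become coefficient-wise in
the NTT domain.

    q, n = 17, 4  # toy parameters with q = 1 (mod 2n)
    psi = next(w for w in range(2, q) if pow(w, n, q) == q - 1)  # 2n-th root of 1

    def ntt(a):
        # evaluate a at the odd powers psi^(2i+1), the roots of x^n + 1 mod q
        return [sum(a[j] * pow(psi, (2 * i + 1) * j, q) for j in range(n)) % q
                for i in range(n)]

    def intt(ah):
        # interpolate back: a_j = n^-1 * sum_i ah[i] * psi^-((2i+1)j)
        n_inv, psi_inv = pow(n, q - 2, q), pow(psi, q - 2, q)
        return [n_inv * sum(ah[i] * pow(psi_inv, (2 * i + 1) * j, q)
                            for i in range(n)) % q for j in range(n)]

    a, b = [1, 2, 3, 4], [5, 6, 7, 8]
    bh = ntt(b)
    prod = intt([x * y % q for x, y in zip(ntt(a), bh)])  # a * b in the ring
    quot = intt([x * pow(y, q - 2, q) % q                 # (a * b) / b
                 for x, y in zip(ntt(prod), bh)])
    print(prod, quot)  # quot == a whenever no entry of ntt(b) is zero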


If NIST does choose to go the NTRU route, it should realize that the
goal of this standardization process is not to run a competition, but
rather to get the best scheme at the end. If external matters, like
the most appropriate schemes getting disqualified for unscientific
reasons, play a role in the outcome, then the purpose of the process
has not been served. Should such a disqualification occur, NIST
should then ask the community to create an NTRU-based scheme based on
the best and most secure current techniques. Given what we have
learned in the last few years, we should be as confident about an
NTRU-type scheme created in a few months by taking components from
other schemes, as we are about the current finalists. There have not
been any attacks on lattice schemes with sensible parameter choices,
and the only issues that came up were tangential to the actual
schemes. They involved either implementation bugs, or issues such as
improper domain separation, parts not being constant time, etc. And
given that the current NTRU scheme has yet to present a level-5
parameter set, there is no reason to assume that it will be less
error-prone. In short, it makes sense to get it right the first time.

Vadim and Damien
non-app.pdf

D. J. Bernstein

Oct 8, 2021, 10:35:18 PM
to pqc-...@list.nist.gov
The most important error in this non-applicability argument was already
pointed out in my message dated 21 May 2021 11:02:59 +0200. See quote
below. This isn't addressed in the new "full scientific PDF".

The accompanying don't-standardize-NTRU message similarly omits
discussion of known counterarguments: there's no discussion of the
security risks of FO derandomization in LPR, of the security risks of
the extra samples released by LPR, of https://eprint.iacr.org/2021/999,
etc. This strikes me as grossly unfair to the NTRU submission.

(The NTRU Prime submission is agnostic to this, supporting both options,
because ongoing analysis of the security risks of both options still
hasn't made clear which one is safer. The message I'm commenting upon
incorrectly lumps NTRU Prime together with NTRU. See the NTRU Prime FAQ
for discussion of why NTRU Prime uses the NTRU name.)

---Dan


D. J. Bernstein writes:
> As for content, details are important, and the quote you point to
> doesn't say what you claim it says. Most importantly, you ask whether
> "schemes are *non-commutative*", whereas the quote is asking whether a
> system is "basé sur des anneaux non-commutatifs" ("based on
> non-commutative rings"). Here's an example to
> illustrate why the words matter:
>
> * In a case against Kyber, the plaintiff's lawyer points to the
> (\Z/q)[x]/(x^256+1) in the Kyber specification, pulls out an expert
> witness saying that this is a commutative ring, points to the
> advertising of this ring as defining the "framework" for Kyber,
> etc., and concludes that Kyber is _based_ on a commutative ring.
>
> * The defendant says "but there are also matrices on top of this so
> Kyber is non-commutative". Even if everyone agrees with some
> definition concluding that the scheme _is_ non-commutative, how is
> this supposed to contradict the statement that the scheme is _based
> on_ a commutative ring?
>
> In the same scenario, if there's then an argument about what exactly
> "based on" means (which I doubt, since procedurally I don't expect 3.21
> to carry such weight), the plaintiff's lawyer will say that what really
> matters here is the efficiency coming from the commutative ring, and the
> plaintiff's expert witnesses will pull out performance numbers
> supporting this, and the court will accept this since performance
> numbers are easy to understand.
>
> Fundamentally, the reality is that Kyber etc. need polynomials for their
> efficiency. The use of polynomials inside noisy DH wasn't published
> before LPR---and the patent came two months before that. Before the
> patent there were less efficient noisy-DH systems that didn't use
> polynomials, and less efficient polynomial systems that didn't use noisy
> DH; neither of these will stop the patent from covering every noisy-DH
> system using polynomials.
>
> I understand that you're trying to draw the dividing line differently,
> saying that what matters isn't _using polynomials_ but rather _not using
> matrices_, so someone who combines polynomials with matrices will cross
> the line. But this line isn't forced by the patent wording (see my email
> dated 13 Dec 2020 18:37:43 +0100), isn't forced by the prior art, and
> doesn't make sense from an efficiency perspective. As I wrote before:
> "It's normal in patent cases for defendants to try to avoid a patented
> efficiency improvement by interpolating between the prior art and the
> efficiency improvement, and it's normal for the patentee to win."

daniel.apon

Oct 8, 2021, 10:50:04 PM
to pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov
Dear Dan, speaking for myself:

Do you believe that the ring of integers \Z (cf. use in Regev's LWE scheme from 2004) is commutative?

Cheers,
--Daniel

daniel.apon

Oct 8, 2021, 10:55:46 PM
to pqc-forum, daniel.apon, D. J. Bernstein, pqc-...@list.nist.gov
Dear Dan, speaking for myself:

Do you believe that degree-0 polynomials are polynomials?

Cheers,
--Daniel

Gustavo Banegas

Oct 9, 2021, 2:17:06 AM
to pqc-...@list.nist.gov
Dear all, (speaking for myself)

It is funny that those two messages appear two days after Dustin's email
about
"outside the bounds of fair play". Let me recall that message:
"
We will do our best to set a standard of good behavior by example.  The
goal of creating secure post-quantum standards for the future should
remain the primary focus of thepqc-forum.  We have always encouraged
interested parties to ask questions, make comments on the submissions,
share their relevant work, etc. in a civil and constructive way." 

All the best,

Gustavo

Vadim Lyubashevsky

Oct 9, 2021, 3:51:46 AM
to pqc-...@list.nist.gov
On Sat, Oct 9, 2021 at 4:35 AM D. J. Bernstein <d...@cr.yp.to> wrote:
>
> The most important error in this non-applicability argument was already
> pointed out in my message dated 21 May 2021 11:02:59 +0200. See quote
> below. This isn't addressed in the new "full scientific PDF".

It is very much addressed. You claim that the phrase "based on
non-commutative rings" by itself is vague as to what "commutative
rings" is referring to. But this is established by CNRS in the first
excerpt we cited that begins with "Commutative Character". The
statement "the used ring is commutative" is establishing that the ring
is where all the elements (e.g. S_A,P_A,S_B,etc.) of their encryption
scheme reside. This is the definition that is then consistently used
in many places throughout the proceedings with CNRS also stating that
their claim exclusively talks about 1 x 1 matrices of ring elements
(see second excerpt). The person issuing the ruling also used the
definition in this way. Thus their promise that the patent does not
include constructions "based on non-commutative rings" is quite clear
in that it excludes Kyber/Saber.

As to whether what they stated in the proceedings carries any weight,
I of course don't know. One would hope that there is some legal
reason why stating "X" to defend your patent would preclude you from
stating "not X" when trying to extend it.

> The accompanying don't-standardize-NTRU message

We did not say "don't standardize NTRU" -- we said "don't standardize
NTRU as is".

Vadim

Christopher J Peikert

Oct 9, 2021, 9:25:27 AM
to Vadim Lyubashevsky, pqc-forum
On Sat, Oct 9, 2021 at 3:51 AM Vadim Lyubashevsky <vadim....@gmail.com> wrote:
On Sat, Oct 9, 2021 at 4:35 AM D. J. Bernstein <d...@cr.yp.to> wrote:
>
> The most important error in this non-applicability argument was already
> pointed out in my message dated 21 May 2021 11:02:59 +0200. See quote
> below. This isn't addressed in the new "full scientific PDF".

It is very much addressed. You claim that the phrase "based on
non-commutative rings" by itself is vague as to what "commutative
rings" is referring to. But this is established by CNRS in the first
excerpt we cited that begins with "Commutative Character". The
statement "the used ring is commutative" is establishing that the ring
is where all the elements (e.g. S_A,P_A,S_B,etc.) of their encryption
scheme reside. This is the definition that is then consistently used
in many places throughout the proceedings...

This is correct. And these restrictions are imposed not just from the proceedings, but by the plain text of the patent itself [link] (emphasis added):
  • "two primary elements X_A and Y_A belonging to a ring R",
  • "the ring R having addition and multiplication",
  • "f being an internal composition law associating with any element XI of the ring R, another element f(XI) of the ring R...",
  • "... and having the property that, for any pair of elements XI and YI of R, such that XI and YI have a norm that is small relative to the elements f(XI) and f(YI), then XI.f(YI) - YI.f(XI) has a small norm".
Thus their promise that the patent does not
include constructions "based on non-commutative rings" is quite clear
in that it excludes Kyber/Saber.

Indeed. And even before the promise, it was clear that the patent's text doesn't cover Kyber/SABER. The promise further cements it in the patentee's own words.

The patent's instantiations are all for commutative rings R. This commutativity enables one to give an "internal composition law" f having the property stated in the final bullet above.

For non-commutative rings R like matrix rings, it's unclear how to obtain such a "law" f. Indeed, only trivially insecure examples like f(X)=0 have ever been given (outside the patent).

(As the proceedings show, this is one reason why the patentee was forced to explicitly restrict its claims to commutative rings R: otherwise the patent would be too broad, since it would cover non-commutative rings for which it gave no instantiations of its claims.)

In any attempt to make Kyber/SABER fit under the patent's claims, the elements X_A, Y_A, S_A, S_B, and others would have to be vectors/matrices of polynomials, and the ring R would have to be a non-commutative matrix ring. For such a ring, no suitable "law" f is known, much less one that casts Kyber/SABER as following the steps and equations required by the patent.

(The lack of any suitable "law" at all for these rings means that Kyber/SABER do not just make some insignificant tweak to avoid the patent, but must work in a fundamentally different way that's not contemplated by the patent.)

More specifically, Kyber/SABER don't calculate "syndromes" using the equations S_A = X_A + f(Y_A), S_B = X_B + f(Y_B), and they don't calculate "intermediate values" P_A = Y_A.S_B = Y_A.X_B + Y_A.f(Y_B), P_B = Y_B.S_A = Y_B.X_A + Y_B.f(Y_A), as required by the patent's "characterized in that" clause. Instead, they use fundamentally different equations and "laws" that don't need commutativity, and work for the non-commutative case.
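
To see exactly where commutativity enters, here is a toy sketch (mine, not from the patent or the note; parameters are illustrative) of the patent's equations over a commutative ring, instantiating the "law" as f(Y) = h.Y for a public h. The "intermediate values" P_A and P_B then agree up to small noise precisely because Y_A.h.Y_B = Y_B.h.Y_A; over a matrix ring this cancellation fails, which is the point above.

    import numpy as np

    q, n = 257, 8  # toy parameters
    rng = np.random.default_rng(1)

    def mul(a, b):
        # multiplication in the commutative ring Z_q[x]/(x^n + 1)
        c = np.zeros(n, dtype=np.int64)
        for i in range(n):
            for j in range(n):
                if i + j < n:
                    c[i + j] += a[i] * b[j]
                else:
                    c[i + j - n] -= a[i] * b[j]
        return c % q

    small = lambda: rng.integers(-1, 2, n)  # small-norm ring elements
    h = rng.integers(0, q, n)               # public; the "law" is f(Y) = h.Y

    X_A, Y_A, X_B, Y_B = small(), small(), small(), small()
    S_A = (X_A + mul(h, Y_A)) % q           # "syndromes"
    S_B = (X_B + mul(h, Y_B)) % q
    P_A = mul(Y_A, S_B)                     # "intermediate values"
    P_B = mul(Y_B, S_A)
    diff = (P_A - P_B) % q
    print(np.minimum(diff, q - diff))  # small: only Y_A.X_B - Y_B.X_A survives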
 
As to whether what they stated in the proceedings carries any weight,
I of course don't know.  One would hope that there is some legal
reason why stating "X" to defend your patent would preclude you from
stating "not X" when trying to extend it.

One would also expect a legal reason why the plain restrictions of the patent's own text would preclude extending it far beyond that text.

Sincerely yours in cryptography,
Chris

daniel.apon

Oct 10, 2021, 2:10:00 AM
to pqc-forum, gustavo
Hi Gustavo (again, speaking in my personal capacity),

If there is an alternate interpretation that is sensible, I am genuinely trying to understand that such an interpretation exists, and what the reasoning might be.

I consider myself an interested party that can ask questions to try to understand, too.

--Daniel

D. J. Bernstein

Oct 10, 2021, 10:33:36 AM
to pqc-...@list.nist.gov
Vadim Lyubashevsky writes:
> We did not say "don't standardize NTRU" -- we said "don't standardize
> NTRU as is".

To clarify, when I wrote "NTRU", I meant the NTRU submission as is, not
some undefined cloud of KEMs. (I commented separately on NTRU Prime.)

The starting message in the thread appears to be predicated upon the
idea that Kyber is an advance over NTRU. The reader is even led to
believe that Kyber is the result of "huge research efforts that have
been made in this area in the last 25 years" and that NTRU isn't. The
message doesn't mention the amply documented paths from research into
the NTRU submission. The message comments at length on one direction of
NTRU security risks without mentioning recent literature claiming to
quantify the exact boundary of those risks. The message doesn't mention
literature giving reasons to believe that Kyber is _more_ risky than
NTRU: e.g., it doesn't mention the ~100 bits of OW-CPA provable-security
loss from Kyber needing FO derandomization. Et cetera.

Can patent threats eliminate otherwise attractive submissions? Yes. Is
the only rationale for NTRU the lack of patent threats? Certainly not.
The literature on this should be properly described.

> You claim that the phrase "based on non-commutative rings" by itself
> is vague

It's not that simple, and at this point people can very easily see the
oversimplification by comparing the followup messages on pqc-forum to
the six-paragraph quote in my 9 Oct 2021 04:35:03 +0200 message.

A review of past pqc-forum postings shows quite a bit of information
regarding (1) the procedures that are used in patent courts, (2) how
they're biased towards patent holders, and (3) what they mean for this
particular point. I don't see any of this---in particular, the portions
addressed in the quote---addressed in this "full scientific PDF". Surely
a scientific analysis of patent "validity" etc. should begin by
reviewing the definitions of those terms.

People are constantly surprised by patent holders winning court cases.
Based on >30 years of watching online patent discussions, I would say
that the surprise comes primarily from people not knowing the rules of
the game---not even the basic fact that there are different rules used
for analyzing validity and for analyzing infringement, never mind the
question of what those rules are.

> One would hope that there is some legal reason why stating "X" to
> defend your patent would preclude you from stating "not X" when trying
> to extend it.

Yes, of course, and some of the mechanisms for this have been covered
previously on pqc-forum. The details of these mechanisms matter, and
again are not addressed in the PDF.

---Dan

Vadim Lyubashevsky

Oct 10, 2021, 2:57:34 PM
to pqc-forum
Dear all,

> On Sun, Oct 10, 2021 at 4:33 PM D. J. Bernstein <d...@cr.yp.to> wrote:
>
> Vadim Lyubashevsky writes:
> > We did not say "don't standardize NTRU" -- we said "don't standardize
> > NTRU as is".
>
> To clarify, when I wrote "NTRU", I meant the NTRU submission as is, not
> some undefined cloud of KEMs. (I commented separately on NTRU Prime.)
>
> The starting message in the thread appears to be predicated upon the
> idea that Kyber is an advance over NTRU. The reader is even led to
> believe that Kyber is the result of "huge research efforts that have
> been made in this area in the last 25 years" and that NTRU isn't.

That was not the meaning of what's written. (In the following, I will
write NTRU for the general framework from [HPS '98] and NTRU-HRSS for
the NTRU instantiation in the finals.) Regardless of what you or I
think about the merits of Ring/Module-LWE schemes vs. NTRU schemes, 14
of the 17 submissions based on structured lattices were some sort of
Ring/Module-LWE type scheme. There was some interesting variety in
there and the community had a chance to express their opinions about
what they prefer. A lot of those varieties could have been easily
ported to NTRU schemes.

Again, regardless of what you or I think, the designers of many of
those Ring/Module-LWE schemes decided that they did not see the need
to propose both NTRU and Ring/Module-LWE versions, even if it were
possible to have both. And this is what is meant by the research
effort in the area being wasted if NIST simply selects the NTRU-type
scheme that just happened to be there -- the majority of the new ideas
went to the Ring/Module-LWE submissions. If NIST had said that the
lattice submissions had to be the NTRU framework from [HPS '98], with
the only freedom being modifying the ring and noise distribution, we
would still have seen some interesting variety. And I am strongly of
the opinion that NTRU-HRSS would not have been chosen for the finals
over the other proposed schemes.

If NIST wants to go the NTRU route, choosing NTRU-HRSS seems like it
would only result in a temporary solution because people would
eventually end up using versions that are an order of magnitude faster
(or switch to Ring-LWE). It seems better to get it right the first
time, even if it takes a few extra months, rather than having to
support multiple schemes down the line.

> A review of past pqc-forum postings shows quite a bit of information
> regarding (1) the procedures that are used in patent courts, (2) how
> they're biased towards patent holders, and (3) what they mean for this
> particular point. I don't see any of this---in particular, the portions
> addressed in the quote---addressed in this "full scientific PDF".

This was information that only you provided; and from quick Google
searches, I did not find your explanations about how the patent owner
is allowed to rewrite the patent to be that reliable. But in any case,
court procedure was not the point of our writeup (since we don't
pretend to know it). Our goal was to clearly explain the differences
between Kyber and the CNRS patent and then to explain what exactly
CNRS claimed in the Keltie vs CNRS litigation.

Based on what was in the litigation proceedings, the CNRS promise that
the patent does not extend to Kyber/Saber is scientifically
indisputable. Whether that's enough for a court, I don't know -- but
that's outside the scope of "scientific" and is left for lawyers to
judge (it would be really nice to hear some opinions about this from
actual lawyers on the list, by the way). Maybe a court can say that
3x3 matrix rings are commutative because, hey, close enough. If the
Indiana legislature came close to defining pi as 3.2
(https://en.wikipedia.org/wiki/Indiana_Pi_Bill), then maybe anything
is indeed possible. But these are not elements that need to be
discussed in a scientific document whose purpose is scientific
objectivity.

Vadim

Watson Ladd

Oct 10, 2021, 3:20:33 PM
to Vadim Lyubashevsky, pqc-forum
On Sun, Oct 10, 2021 at 11:57 AM Vadim Lyubashevsky
<vadim....@gmail.com> wrote:
>
> Dear all,
>
> > On Sun, Oct 10, 2021 at 4:33 PM D. J. Bernstein <d...@cr.yp.to> wrote:
> >
> > Vadim Lyubashevsky writes:
> > > We did not say "don't standardize NTRU" -- we said "don't standardize
> > > NTRU as is".
> >
<snip>
> Based on what was in the litigation proceedings, the CNRS promise that
> the patent does not extend to Kyber/Saber is scientifically
> indisputable. Whether that's enough for a court, I don't know -- but
> that's outside the scope of "scientific" and is left for lawyers to
> judge (it would be really nice to hear some opinions about this from
> actual lawyers on the list, by the way).

If CNRS wanted to clear the entire dispute up, they could do so by
licensing the patent on FRAND terms. That they haven't speaks volumes.

> Maybe a court can say that
> 3x3 matrix rings are commutative because, hey, close enough. If the
> Indiana legislature came close to defining pi as 3.2,
> (https://en.wikipedia.org/wiki/Indiana_Pi_Bill), then maybe anything
> is indeed possible. But these are not elements that need to be
> discussed in a scientific document whose purpose is scientific
> objectivity.
>
> Vadim
>



--
Astra mortemque praestare gradatim

daniel.apon

Oct 10, 2021, 8:15:08 PM
to pqc-forum, watso...@gmail.com, pqc-forum, vadim....@gmail.com
" If CNRS wanted to clear the entire dispute up, they could do so by
licensing the patent on FRAND terms. That they haven't speaks volumes."

Hi, could you clarify what you mean specifically?


D. J. Bernstein

Oct 11, 2021, 2:32:50 AM
to pqc-...@list.nist.gov
Vadim Lyubashevsky writes:
> Regardless of what you or I think about the merits of Ring/Module-LWE
> schemes vs. NTRU schemes, 14 of the 17 submissions based on structured
> latices were some sort of Ring/Module-LWE type scheme.

Google's well-known experiment with NewHope---a system that is overall
much less efficient than the NTRU submission, and that many people saw
still had room for improvement---triggered submissions of many minor
variations of NewHope (e.g., fancier error-correcting codes for smaller
ciphertexts), including submissions subsequently shown not to meet their
security claims, such as HILA5, LAC, and Round2. These security failures

* weren't stopped by any of the copy-and-paste security analyses,
* weren't stopped by the pervasive misinformation that LPR-type
systems are as hard to break as the NTRU submission, and
* reflect a much bigger problem, the lack of maturity of analysis.

Adding the extra complication of modules, as in Kyber, certainly doesn't
make the security analysis _more_ mature. The NTRU submission scores
better than Kyber in maturity of security analysis: it analyzes hybrid
attacks, for example, and structurally avoids FO-derandomization risks.
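
For readers who haven't seen the transform: here's a generic schematic
(mine; the hashes and the toy placeholder PKE are illustrative, not
any submission's exact transform) of FO derandomization. The
encryption coins are derived from the message, and decapsulation
re-encrypts to check the ciphertext.

    import hashlib, os

    def H(*parts):
        return hashlib.sha3_256(b"".join(parts)).digest()

    # toy placeholder PKE (NOT secure; pk == sk, only to make this runnable)
    def toy_encrypt(pk, m, coins):
        pad = H(b"pad", pk, coins)
        return coins + bytes(x ^ y for x, y in zip(m, pad))

    def toy_decrypt(sk, ct):
        coins, body = ct[:32], ct[32:]
        return bytes(x ^ y for x, y in zip(body, H(b"pad", sk, coins)))

    def encaps(pk):
        m = os.urandom(32)
        coins = H(b"G", m)               # derandomization: coins := G(m)
        ct = toy_encrypt(pk, m, coins)   # deterministic given m
        return ct, H(b"K", m, ct)

    def decaps(sk, pk, ct):
        m = toy_decrypt(sk, ct)
        if toy_encrypt(pk, m, H(b"G", m)) != ct:  # re-encryption check
            return H(b"reject", sk, ct)           # implicit rejection
        return H(b"K", m, ct)

    key = os.urandom(32)  # toy: pk == sk
    ct, k1 = encaps(key)
    assert decaps(key, key, ct) == k1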

> And this is what is meant by the research
> effort in the area being wasted if NIST simply selects the NTRU-type
> scheme that just happened to be there -- the majority of the new ideas
> went to the Ring/Module-LWE submissions.

Such as the ideas that led to HILA5, LAC, and Round2? How exactly are
those beneficial for a proposed variant of the NTRU submission?

The beginning of this thread argued that NTRU should make two specific
modifications:

* Taking a smaller modulus with decryption failures. This certainly
isn't a new idea: on the contrary, the original NTRU paper focused
on this, as did most followups. It's only within the last several
years that the systems-security perspective of "Stay away from this
hard-to-analyze mess" has been influencing some designs. Advocating
a return to this aspect of the original NTRU papers doesn't at all
fit the "new ideas" narrative.

* Switching to an NTT-friendly ring. The bulk of the performance
benefit claimed at the beginning of this thread is from faster
divisions in NTT-friendly rings; the division algorithm, in turn,
is ancient and well known, although avoiding the traditional name
"deconvolution" can make it unnecessarily hard for readers to find
the literature. The NTRU Prime paper presents "NTRU NTT" as one of
the top-level options, and avoids it for explicit security reasons.

How exactly is skipping these two modifications supposed to be "a very
disappointing outcome to the whole effort, and to the huge research
efforts that have been made in this area in the last 25 years"?

If the answer is to point to new research on details of the algorithm
optimization and analysis: Sure, there's lots going on at this level---
and of course the NTRU submission has gained speed from this, again
contradicting the narrative here. For example, NTRU gained inversion
speed from https://eprint.iacr.org/2019/266; should it be arguing that
this research and followups on verification etc. would be "wasted" if
these polynomial inversions aren't deployed? Is "minimize the waste of
research time" one of the NISTPQC evaluation criteria?

Regarding the merits of the modifications proposed in this thread: Even
if one disregards the counterarguments in the literature and concludes
that these two modifications would be beneficial, let's look concretely
at what this means for the end user:

* The performance benefit from decryption failures, accounting for
the known attacks, is a small percentage improvement. The claim
that attack cost "decreases much faster when the working modulus q
increases" for these systems is an oversimplification; it ignores
known transition effects that are (1) described in every paper on
this topic, (2) easily seen in experiments, and (3) claimed to be
fully explained in a recent paper https://eprint.iacr.org/2021/999.

* The "being 15X behind in speed is a bit much" claim is comparing
320000 cycles to 19000 cycles for keygen+enc+dec. This ignores (1)
the fact that 220000 cycles disappear in applications reusing keys;
(2) literature showing how to reduce costs even in applications
that _don't_ reuse keys (see https://eprint.iacr.org/2021/826, to
appear at USENIX Security 2022); (3) the equivalent of millions of
cycles spent on communication, no matter which system one chooses.

As for the generic argument that there could be better attacks ("in case
better algorithms are developed", "concerning", "be wary", etc.), the
literature already shows that this is a two-edged sword. There could be
better attacks against Kyber that wouldn't apply to NTRU, for example
exploiting the extra samples, the much smaller base ring, or the FO
derandomization. A ring selected for speed can lead to better attacks,
as nicely illustrated by the quantum polynomial-time attacks against
Gentry's original STOC 2009 FHE system _for some rings_. Sometimes
concerns are resolved by proofs, but the proofs available in this area
are much less powerful than they're commonly portrayed as being.

> This was information that only you provided; and from quick google
> searches, I did not find your explanations about how the patent owner
> is allowed to rewrite the patent to be that reliable. But in any case,
> court procedure was not the point of our writeup (since we don't
> pretend to know it).

Thank you for reporting the background and procedures that entered into
these reliability assessments.

If you saw a PDF drawing conclusions about IND-CCA2 security, you'd
expect the authors to know the definition of IND-CCA2 security, right?
Why should the public put any stock in a PDF that purports to analyze
patent "validity" etc. without starting from how "validity" is defined?

> the CNRS promise that the patent does not extend to Kyber/Saber

This is not an accurate description of what they wrote.

> Maybe a court can say that 3x3 matrix rings are commutative because,
> hey, close enough.

Already answered below.

Vadim Lyubashevsky

Oct 11, 2021, 6:12:19 AM
to pqc-forum
Dear all,

On Mon, Oct 11, 2021 at 8:32 AM D. J. Bernstein <d...@cr.yp.to> wrote:

> * Taking a smaller modulus with decryption failures. This certainly
> isn't a new idea: on the contrary, the original NTRU paper focused
> on this, as did most followups. It's only within the last several
> years that the systems-security perspective of "Stay away from this
> hard-to-analyze mess" has been influencing some designs. Advocating
> a return to this aspect of the original NTRU papers doesn't at all
> fit the "new ideas" narrative.

It is not a hard-to-analyze mess. Computing the decryption error can
be done via a simple script, and we have theorems stating how the
decryption error affects the tightness of security reductions. Yes, it
adds a little more complexity, and mistakes can be made by
individuals. But if the cryptographic community, as a whole, is unable
to verify the correctness of a script in 5 years, then perhaps it
should not be tasked with designing schemes responsible for global
security. There is some novelty in the theorems, but there is also
novelty as to why we may want to keep the modulus as small as possible
specifically for NTRU -- and this is inconsistent with 0 decryption
errors. Yes, the paper https://eprint.iacr.org/2021/999 explains a
limitation of one line of attack, but these are relatively new
attacks, so there could be more there. I am very surprised by how much
you're bothered by all lattice algorithm advances *except* the line of
attacks that reduced NTRU's security from 2^sqrt(n) to poly(n) for
large moduli. Would we hear more about the latter from you if they
also applied to Kyber? Maybe we should open up this last question to
the audience :).

> * Switching to an NTT-friendly ring. The bulk of the performance
> benefit claimed at the beginning of this thread is from faster
> divisions in NTT-friendly rings; the division algorithm, in turn,
> is ancient and well known, although avoiding the traditional name
> "deconvolution" can make it unnecessarily hard for readers to find
> the literature. The NTRU Prime paper presents "NTRU NTT" as one of
> the top-level options, and avoids it for explicit security reasons.
>
> How exactly is skipping these two modifications supposed to be "a very
> disappointing outcome to the whole effort, and to the huge research
> efforts that have been made in this area in the last 25 years"?

NTT is of course not new. The new part is that we now know just how
efficient NTT is -- lattice schemes using NTT can be 10 - 20X faster
than ECDH. And as importantly, we have a line of research that states
that avoiding a modulus q that supports NTT is (at least
asymptotically) unnecessary -- the worst-case to average-case
reductions don't care about the ring modulus (assuming that worst-case
lattice problems are actually hard). This was quite surprising
because the original attacks of Gentry
(https://link.springer.com/content/pdf/10.1007%2F3-540-44987-6_12.pdf)
against a version of NTRU crucially used the fact that having X^N-1
factor into two polynomials over the integers with small norms and
significantly smaller degrees led to better attacks. And the same
thing would hold if some polynomial factored in this way modulo q ...
and yet the reductions said that it doesn't matter. So maybe one can
choose a strategic q and then be able to solve LWE/SIS much faster, so
the security reductions were actually algorithms? It turns out,
however, that X^N+1 (or any other nice cyclotomic f(X)) cannot have
factors of small degree that also have small norm -- e.g., any factor of
X^N+1 of degree N/2 must have l_2 norm at least sqrt(q). So there is
no reason to check the exponential number of possibilities, as you
mentioned several times before.
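
For the curious, this bound is easy to sanity-check by brute force at
toy sizes (our own toy check, not from any paper; real parameter sizes
of course need the actual proof):

    from itertools import product

    N, q = 4, 17

    def polymul_mod(a, b):
        # multiply polynomials (coefficient lists, low degree first) mod q
        c = [0] * (len(a) + len(b) - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                c[i + j] = (c[i + j] + x * y) % q
        return c

    center = lambda x: x - q if x > q // 2 else x
    target = [1, 0, 0, 0, 1]  # X^4 + 1, coefficients low degree first
    norms = [sum(center(c) ** 2 for c in list(f) + [1]) ** 0.5
             for f in product(range(q), repeat=N // 2)  # monic f, degree N/2
             for g in product(range(q), repeat=N // 2)  # monic cofactor g
             if polymul_mod(list(f) + [1], list(g) + [1]) == target]
    print(min(norms), q ** 0.5)  # the minimum meets the sqrt(q) bound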

Anyway, the above is my opinion, and it's perfectly fine if you have a
different one. The point is that the community was not presented with
much choice for NTRU-based schemes; that you happen to like the few
schemes that were presented is not relevant. The goal of our email was
to bring attention to this fact and to give examples as to what was
missing.

Vadim


D. J. Bernstein

Oct 11, 2021, 10:13:10 AM
to pqc-...@list.nist.gov
Vadim Lyubashevsky writes:
> NTT is of course not new. The new part is that we now know just how
> efficient NTT is

I see the developments as much more gradual than this. Anyway, even if
we imagine these specific speedups as being completely new, I don't see
how this adds up to saying that rejecting NTT-friendly rings (and
rejecting decryption failures) would be "a very disappointing outcome to
the whole effort, and to the huge research efforts that have been made
in this area in the last 25 years".

> lattice schemes using NTT can be 10 - 20X faster than ECDH

The costs of lattice-based cryptography are pretty much always dominated
by communication. This gives us the flexibility to invest CPU cycles in
extra defenses with very little impact on total cost.

Regarding the ECDH comparison, even if communication costs (which of
course massively favor ECDH, say 32 bytes vs. 1KB) are ignored, this
"10 - 20X faster" understates current ECDH software speeds on the same
platform (from, e.g., gls254) and over-extrapolates from that platform
to speed generally (Koblitz curves allow super-efficient ASICs).

> the worst-case to average-case reductions don't care about the ring
> modulus (assuming that worst-case lattice problems are actually hard)

I see that these worst-case-to-average-case reductions play an important
role in your message, which isn't a surprise given the role that they
play in the literature, but unfortunately commenting properly on this
requires more time than I can allocate at this instant.

> It turns out,
> however, that X^N+1 (or any other nice cyclotomic f(X)) cannot have
> factors of small degree that also have small norm -- e.g.any factor of
> X^N+1 of degree N/2 must have l_2 norm at least sqrt(q). So there is
> no reason to check the exponential number of possibilities, as you
> mentioned several times before.

Hmmm. There are easy theorems like this regarding the initial attack
presented in https://eprint.iacr.org/2014/784, but _not_ for the more
general attacks presented in followup papers. See generally my message
dated 24 Nov 2019 22:17:34 -0000 and preceding messages in that thread.

> The point is that the community was not presented with much choice for
> NTRU-based schemes

The actual objections here to the NTRU submission boil down to asking
for (1) decryption failures as in the original NTRU paper and (2) an
NTT-friendly ring. The NTRU team could just as easily object that Kyber
doesn't provide (1) intermediate sizes, (2) zero decryption failures,
(3) a way to avoid the provable-security loss of FO derandomization,
etc.

Is it helpful to have these technical objections presented as
unquantified characterizations of the amount of "choice" provided by
selected groupings of the submissions? The technical objections are easy
to rationally discuss, whereas the characterizations aren't.

I've already proposed that NIST set a rule of clarity for all NISTPQC
input, and I've pointed out that this rule seems to be forced by the
call for submissions.

> I am very surprised by how much you're bothered by all lattice
> algorithm advances *except* the line of attacks that reduced NTRU's
> security from 2^sqrt(n) to poly(n) for large moduli.

The above characterization is out of whack with what I've written, and
the gratuitous personalization is not helpful for the discussion.

Regarding the content, it's important to clearly distinguish analysis of
_known_ attacks from analysis of the possibility of _better_ attacks:

* Regarding _known_ attacks, there's a recent paper that claims to
pinpoint the cutoff for "large" in the NTRU context (whereas
quantification for the analogous Ring-LWE attack presented in
https://cr.yp.to/talks.html#2020.02.19 remains an open problem).

The message that began this thread describes the attacks as if they
didn't have a cutoff ("the cost of solving it decreases much faster
when the working modulus q increases"), and omits mention of
literature to the contrary.

* Regarding the possibility of _better_ attacks, of course one has to
worry about new ideas reducing the "large" cutoff in both contexts,
and about new ideas for exploiting homogeneous structure, and about
new ideas for exploiting the extra samples from Ring-LWE, and about
new ideas for exploiting FO derandomization, etc.

The message that began this thread describes only anti-NTRU
possibilities of better attacks, omitting mention of many other
possibilities described in the literature. I don't see how this
selection can be scientifically justified.

---Dan

Vadim Lyubashevsky

Oct 11, 2021, 10:39:25 AM
to pqc-forum
On Mon, Oct 11, 2021 at 4:13 PM D. J. Bernstein <d...@cr.yp.to> wrote:
>
> I don't see
> how this adds up to saying that rejecting NTT-friendly rings (and
> rejecting decryption failures) would be "a very disappointing outcome to
> the whole effort, and to the huge research efforts that have been made
> in this area in the last 25 years".

Not being given the choice to consider these things is the
disappointing outcome. Suppose NIST declared after 1 week in 2017
that they're actually not going to proceed with 4 years of this boring
"non-competition" and will instead throw a coin to decide the
standard. If Kyber turned out to be the winner, I would have felt
that the correct scheme was "chosen". But I would see how others may
view the process that led to the outcome as research efforts being
wasted. This is what's happening here on the NTRU side -- there is a
chance that NTRU-HRSS could end up getting standardized by default.

In the rest of this email, you seem to just be arguing NTRU vs. Kyber,
which is, again, not the point of what we wrote in the email, so I
will stop here.

Vadim

Phillip Hallam-Baker

Oct 11, 2021, 3:55:23 PM
to pqc-forum
Without wanting to discuss the specifics of this particular situation, it is my experience that whether an article actually infringes is of the utmost unimportance in standards work.

I have been an expert witness in various patent proceedings and observed countless others. At this point, well over a billion dollars has been paid either in damages or to avoid ridiculous damages over work that I (and others) had established clear prior art on.

Given that the number of people who can understand this particular set of math is unlikely to be more than 50, even finding an expert consultant is likely to be a major challenge.

Short of someone suing to obtain a declaratory judgement on non-infringement, I can't see anyone putting this technology into product. The risk of someone building a post-quantum computer is substantially less than the risk of a $500 million judgement.

Christopher J Peikert

Oct 11, 2021, 4:13:37 PM
to Phillip Hallam-Baker, pqc-forum
Dear Phillip, thanks for your comments, and I appreciate the "utmost unimportance" remark. But let me address one point:

On Mon, Oct 11, 2021 at 3:55 PM Phillip Hallam-Baker <ph...@hallambaker.com> wrote:
Given that the number of people who can understand this particular set of math is unlikely to be more than 50, even finding an expert consultant is likely to be a major challenge.

On the contrary, there are almost certainly already hundreds (if not thousands) of people who can/do understand the relevant math, and this number will continue to grow. Almost anyone who can understand elementary polynomial and matrix arithmetic, which are routinely taught to undergraduates around the world, can handle it. I doubt there will be any shortage of experts.

daniel.apon

Oct 11, 2021, 7:45:31 PM
to pqc-forum, cpei...@alum.mit.edu, pqc-forum, hal...@gmail.com
Dear Phillip and all,

I tend to agree with Chris (so far, of course). The level of expertise required in this scenario is fairly minimal. Admittedly, it takes some handful of perhaps-tedious hours to read the material unique to this issue (such as the patent itself), but the issue at question (still, to me) seems to require a relatively low level of mathematical prowess.

An interested and motivated PhD student in cryptography, or computer science, or computer engineering, or mathematics, etc. -- who has completed their first couple of years of graduate school, who has completed their general coursework, who has written a paper in this area (or maybe even not), who is willing to spend some time over the course of a couple weeks (with someone they could ask clarifying questions to), and who is willing to devote the time to carefully read the patent claim line by line, should be able to understand the issues involved and form their own opinion. At most, perhaps they would need one or two 1-hour tutorial(s) on basic lattice-style cryptography. Perhaps it would help if they had a course along the way on "generic" cryptography (definitions of IND-CPA, IND-CCA, etc.).

However -- given the time and effort required to (seriously) read the relevant technical material -- almost anyone at or beyond the career stage I've described should be capable of understanding the issues here.

There is nothing particularly "hidden" here, which is why the issue remains surprising to me in its current state.

daniel.apon

Oct 11, 2021, 8:10:57 PM
to pqc-forum, daniel.apon, cpei...@alum.mit.edu, pqc-forum, hal...@gmail.com
https://imgur.com/a/tRPb6i6

(For future historians: One wonders if this is the first instance of a NIST employee posting a meme on an internet message board)

Phillip Hallam-Baker

Oct 12, 2021, 12:43:53 PM
to daniel.apon, pqc-forum, cpei...@alum.mit.edu
Memes are really not a professional or suitable way to discuss such issues. They are a means of demeaning the content of an argument without actually engaging on the specifics.


The notion that a suit in this area would be simple is a nice theory but one which in my experience falls rather short of the reality of the US patent system.

The test is not finding someone who thinks they understand it themselves, it is finding someone who can explain it to a judge who is not a mathematician. I charge $500/hr, the lawyers charge the same. These cases cost millions of dollars to defend and the costs are not recoverable.

I spent a very long time proving that a purported patent for WebMail was invalid. That should have been a slam dunk because I used WebMail as one of the test cases for HTTP and found the POST method simply did not work. That is where Content-Length came from. It is hard to imagine better prior art but the case still settled.

It is going to take an expired patent, a declaratory judgement or an explicit license grant to provide sufficient certainty to ship product if there are any questions raised. 

In game theoretic terms, the advantage lies with the plaintiff in both an infringement suit and a declaratory judgement suit. In an infringement suit, the defendant has to spend a considerable sum of money to avoid loss of an even more considerable sum. In a declaratory judgement suit, the rights holder has to spend a considerable sum knowing that if they win the case, the plaintiff will simply adopt a different, non-infringing algorithm.




Vadim Lyubashevsky

Oct 12, 2021, 12:55:36 PM
to Phillip Hallam-Baker, daniel.apon, pqc-forum, cpei...@alum.mit.edu
Thank you for this information, Phillip.

> In game theoretic terms, the advantage lies with the plaintiff in both an infringement suit and a declaratory judgement suit. In an infringement suit, the defendant has to spend a considerable sum of money to avoid loss of an even more considerable sum. In a declaratory judgement suit, the rights holder has to spend a considerable sum knowing that if they win the case, the plaintiff will simply adopt a different, non-infringing algorithm.
>

This makes sense. Can I then ask NIST why a declaratory judgement
suit is not the default approach for dealing with patent claims? How
much time do these suits usually take?



Vadim



Apon, Daniel C. (Fed)

unread,
Oct 12, 2021, 2:31:38 PM10/12/21
to Phillip Hallam-Baker, pqc-forum, cpei...@alum.mit.edu
Hi Phillip,

I completely agree with you. (I didn't intend to demean the content of the argument without actually engaging on the specifics -- I apologize for how it came across.)

A scientific opinion is not the same as a legal proceeding. I think most people who are familiar with these issues will concur with you that this is the key point:
"It is going to take an expired patent, a declaratory judgement or an explicit license grant to provide sufficient certainty to ship product if there are any questions raised."

I appreciate the clarity you've added to the discussion thread.
--Daniel


From: Phillip Hallam-Baker <ph...@hallambaker.com>
Sent: Tuesday, October 12, 2021 12:43 PM
To: Apon, Daniel C. (Fed) <danie...@nist.gov>
Cc: pqc-forum <pqc-...@list.nist.gov>; cpei...@alum.mit.edu <cpei...@alum.mit.edu>
Subject: Re: [pqc-forum] Non-applicability of the CNRS patent to Kyber and Saber, and next steps
 

Blumenthal, Uri - 0553 - MITLL

unread,
Oct 12, 2021, 2:41:50 PM10/12/21
to pqc-forum
>> In game theoretic terms, the advantage lies with the plaintiff in both
>> an infringement suit and a declaratory judgement suit . . . . .
>> In a declaratory judgement suit,
>> the rights holder has to spend a considerable sum knowing that if they win
>> the case, the plaintiff will simply adopt a different, non-infringing algorithm.
>
>
> This makes sense. Can I then ask NIST as to why a declaratory
> judgement suit is not the default approach for dealing with
> patent claims? How much time do these suits usually take?

Let me second this question.

Along the same lines, since this seems to be a concern not just for NIST, perhaps other federal government departments (e.g., Department of Treasury?) could join NIST in filing such a declaratory judgment suit? To get this issue addressed once and for all?

Also, how much ability would the rights holder have to drag such a case out?





Robinson, Angela Y. (Fed)

unread,
Oct 12, 2021, 2:57:33 PM10/12/21
to Watson Ladd, Vadim Lyubashevsky, pqc-forum
Dear Vadim,

Thank you for the time and effort you and Damien put into this document to both itemize and summarize points about the CNRS patent. Among the multiple active threads on the PQC-forum, I wanted to circle back and ask about your point below. I am wondering if your comment:
"If CNRS wanted to clear the entire dispute up, they could do so by licensing the patent on FRAND terms. That they haven't speaks volumes."

means that you don't agree that what CNRS is calling "FRAND terms" should be classified as FRAND terms, or if you mean something else. I only ask because your document concludes with a print of the CNRS statement that asserts FRAND terms will be granted in the event that (sparing all of the tedious details) a NIST Standard includes proposals covered by CNRS Patents. Would you mind elaborating a bit on your point?


Kind regards,
Angela (speaking for myself & reattaching for those who may have gotten lost in the forum lately)


-----Original Message-----
From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> On Behalf Of Watson Ladd
Sent: Sunday, October 10, 2021 3:20 PM
To: Vadim Lyubashevsky <vadim....@gmail.com>
Cc: pqc-forum <pqc-...@list.nist.gov>
Subject: Re: [pqc-forum] Non-applicability of the CNRS patent to Kyber and Saber, and next steps

On Sun, Oct 10, 2021 at 11:57 AM Vadim Lyubashevsky <vadim....@gmail.com> wrote:
>
> Dear all,
>
> > On Sun, Oct 10, 2021 at 4:33 PM D. J. Bernstein <d...@cr.yp.to> wrote:
> >
> > Vadim Lyubashevsky writes:
> > > We did not say "don't standardize NTRU" -- we said "don't
> > > standardize NTRU as is".
> >
<snip>
> Based on what was in the litigation proceedings, the CNRS promise that
> the patent does not extend to Kyber/Saber is scientifically
> indisputable. Whether that's enough for a court, I don't know -- but
> that's outside the scope of "scientific" and is left for lawyers to
> judge (it would be really nice to hear some opinions about this from
> actual lawyers on the list, by the way).

If CNRS wanted to clear the entire dispute up, they could do so by licensing the patent on FRAND terms. That they haven't speaks volumes.

> Maybe a court can say that
> 3x3 matrix rings are commutative because, hey, close enough. If the
> Indiana legislature came close to defining pi as 3.2,
> (https://en.wikipedia.org/wiki/Indiana_Pi_Bill), then maybe anything is indeed possible. But these are not elements that need to be discussed in a scientific document whose purpose is scientific objectivity.
>
> Vadim
>



--
Astra mortemque praestare gradatim

non-app (002).pdf

Vadim Lyubashevsky

unread,
Oct 12, 2021, 3:01:28 PM10/12/21
to Robinson, Angela Y. (Fed), Watson Ladd, pqc-forum
Hi Angela,

On Tue, Oct 12, 2021 at 8:57 PM 'Robinson, Angela Y. (Fed)' via
pqc-forum <pqc-...@list.nist.gov> wrote:
>
> Dear Vadim,
>
> Thank you for the time and effort you and Damien put into this document to both itemize and summarize points about the CNRS patent. Among the multiple active threads on the PQC-forum, I wanted to circle back and ask about your point below. I am wondering if your comment:
> "If CNRS wanted to clear the entire dispute up, they could do so by licensing the patent on FRAND terms. That they haven't speaks volumes."

This was actually not my comment, but that of Watson Ladd. Perhaps he
can elaborate.

Best,
Vadim

Robinson, Angela Y. (Fed)

unread,
Oct 12, 2021, 3:03:42 PM10/12/21
to Vadim Lyubashevsky, Watson Ladd, pqc-forum
Ah, sorry about that! Then it was likely a typo or something of that nature.

Thanks for your quick reply,
Angela

Blumenthal, Uri - 0553 - MITLL

unread,
Oct 12, 2021, 3:12:36 PM10/12/21
to Robinson, Angela Y. (Fed), pqc-forum
I'm neither Watson nor Vadim ;-), but here's my concern and my $0.02: a NIST standard cannot be "owned" by an organization that may charge money merely for implementing or using that standard.

IMHO, NIST needs to unambiguously clear the status of the threatening patents, probably via filing a declaratory judgment suit.

Leaving it at "They (CNRS?) promised to license it to anybody on a non-discriminatory basis" is *not good enough* (IMHO).
--
Regards,
Uri

There are two ways to design a system. One is to make it so simple there are obviously no deficiencies.
The other is to make it so complex there are no obvious deficiencies.
- C. A. R. Hoare

D. J. Bernstein

unread,
Oct 12, 2021, 11:44:36 PM10/12/21
to pqc-...@list.nist.gov
Vadim Lyubashevsky writes:
> How much time do these suits usually take?

Keltie's opposition was filed with the EPO in January 2017, asking for
this particular patent to be declared invalid (a simpler process than
asking a court for a declaratory judgment of non-infringement). The
opposition was rejected slightly under two years later. The appeal
process has been slower. See

https://register.epo.org/application?number=EP11712927&lng=en&tab=doclist

for the full document list, including some interesting new documents.

For another example of a declaratory-judgment schedule, this time in the
US and going directly to court, see https://cr.yp.to/export/docs.html. I
filed suit in February 1995. The main government argument was rejected
by the court in April 1996, and the government lost the case in December
1996. The government's evasion effort, shifting its unconstitutional
regulatory scheme to the Department of Commerce, lost in August 1997.
The government's appeal lost in May 1999. The government abandoned the
most important regulations shortly after that.

Obviously people have very different ideas regarding how proactive to be
regarding risk management in general, and regarding patent risks in
particular. In May 2018, four days after NIST's posting of the patent
statements required by the call for submissions, I tweeted the
following:

Alert: There's a patent on noisy DH+reconciliation with priority date
2010.02.18: https://patents.google.com/patent/EP2537284B1/en.
Preliminary assessment is that this covers many lattice-based and
some code-based #NISTPQC submissions. Even worse mess than the Ding
patent.

https://twitter.com/hashbreaker/status/995111649982939136

That was well over three years ago, and as far as I know was the first
effort to alert the public. Recall that, as part of the same posting,
NIST had gone out of its way to discourage public patent analysis:

For the 1^st round, all submissions should be evaluated and analyzed
on their technical merits.

Information eventually dribbled out regarding NIST's overconfident
secret-negotiation approach to handling patents. I expressed concern
regarding the approach. See my pqc-forum email dated 1 Jun 2019 18:08:58
-0000, after ISARA (which took Certicom's strategy of shotgun ECC
patents and adapted it to post-quantum crypto) claimed that it "will be
working together with NIST to provide a royalty-free grant to all
schemes in the NIST competition":

This sounds great if it actually happens. However, I'm concerned
about the following scenario:

* The _hope_ of free use of the patents leads the patents to be given
lower weight in selections than they would normally be given.

* Negotiations between NIST and ISARA drag on, and eventually it
turns out that NIST can't afford ISARA's buyout price.

* The selections thus end up more tilted towards ISARA's patents than
they otherwise would have been.

* Users ask, quite reasonably, why patents weren't assigned a higher
weight in the decision-making process.

Is there a more specific timeframe for "will be working together"?

NIST's overconfident approach is again visible in, e.g., NIST's December
2020 private email saying "We are working to clear up the IP situation,
but it is a slow process", with no acknowledgment of the possibility of
failure. The details of ISARA's eventual patent grant

https://www.isara.com/nist-grant.html

are much more limited than one might hope---poison pills; exclusion of
implementation patents; not even close to "all schemes in the NIST
competition"---and it seems that NIST's approach has failed completely
regarding the Gaborit--Aguilar Melchor patent and the Ding patent.

NIST will of course claim that it can't afford to allocate the many
hours necessary to fight court battles regarding two patents, never mind
all of the other patents of interest. This sort of thought process is
understandable for a company trying to maximize its income---whoops,
stepped on a landmine, now the patent holder is asking us for $300000, a
court battle would cost more than this, better pay up---but government
agencies are supposed to be driven by the public interest. NIST could
already have said "Yikes!" in May 2018 and asked the public for help in
organizing resources to handle the patent issues most effectively.

---Dan

Christopher J Peikert

unread,
Oct 13, 2021, 9:50:15 AM10/13/21
to Phillip Hallam-Baker, daniel.apon, pqc-forum
On Tue, Oct 12, 2021 at 12:43 PM Phillip Hallam-Baker <ph...@hallambaker.com> wrote:
I spent a very long time proving that a purported patent for WebMail was invalid. That should have been a slam dunk because I used WebMail as one of the test cases for HTTP and found the POST method simply did not work. That is where Content-Length came from. It is hard to imagine better prior art but the case still settled.

I have no doubt about any of this, but want to point out that the present discussion is about whether Kyber/SABER infringe on a specific patent, and not the patent's validity.

As I understand it, at least in the US, proving infringement is a significantly higher bar than showing validity: the patent holder bears the burden of proof to show infringement, via a preponderance of the evidence. Whereas for validity, the patent is presumed valid and the burden is on the party claiming otherwise, by clear and convincing evidence.

(As with everything in law, there are exceptions and corner cases, but these are the broad principles.)
 
It is going to take an expired patent, a declaratory judgement or an explicit license grant to provide sufficient certainty to ship product if there are any questions raised.

Given the above, I wonder whether this is really true in this case. Regarding infringement, the following facts are abundantly clear:

1. The patent holder has explicitly said, in its own arguments, that the ring R (to which the elements X_A, S_A, etc. belong) must be commutative. And this figured into the board's decision to uphold the patent (for now).

CNRS, translated from the French: "This supposes in particular that the used ring [R, named earlier] is commutative... Those skilled in the art would therefore have recognized that, even if the commutativity is not explicitly specified, it is an implicit characteristic of Claim 1... The owner indicates that in no case would the patent as granted prevent the protection of a development based on non-commutative rings." (See the document attached to the original post in this thread [link].)

2. In Kyber/SABER, the ring R (to which the elements corresponding to X_A, S_A, etc. would belong) is non-commutative; the sketch after point 3 makes this concrete.

3. Moreover, in Kyber/SABER's rings R, there is no suitable "internal composition law f" having the properties the patent requires, and they use fundamentally different equations from the ones required by the patent.
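
To make the commutativity gap concrete, here is a minimal Python sketch (toy parameters n = 4, q = 17 rather than Kyber's n = 256, q = 3329, and a generic 2x2 matrix example rather than anything taken from the patent): multiplication in the base ring Z_q[x]/(x^n + 1) commutes, while multiplication of matrices over that ring does not.

    # Toy parameters for readability; Kyber uses n = 256, q = 3329.
    n, q = 4, 17

    def poly_mul(a, b):
        # Multiply in Z_q[x]/(x^n + 1): x^n wraps around to -1.
        res = [0] * n
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                if i + j < n:
                    res[i + j] = (res[i + j] + ai * bj) % q
                else:
                    res[i + j - n] = (res[i + j - n] - ai * bj) % q
        return res

    def poly_add(a, b):
        return [(x + y) % q for x, y in zip(a, b)]

    def mat_mul(A, B):
        # Multiply k x k matrices whose entries are ring elements.
        k = len(A)
        C = [[[0] * n for _ in range(k)] for _ in range(k)]
        for i in range(k):
            for j in range(k):
                for t in range(k):
                    C[i][j] = poly_add(C[i][j], poly_mul(A[i][t], B[t][j]))
        return C

    one, zero = [1, 0, 0, 0], [0, 0, 0, 0]
    f, g = [1, 2, 0, 5], [3, 0, 7, 1]
    assert poly_mul(f, g) == poly_mul(g, f)   # the base ring commutes

    E12 = [[zero, one], [zero, zero]]         # ...but the matrix ring does not
    E21 = [[zero, zero], [one, zero]]
    assert mat_mul(E12, E21) != mat_mul(E21, E12)

Both asserts pass: E12 * E21 and E21 * E12 land in different corners of the matrix, which is exactly why a claim that is "implicitly commutative" cannot be read onto a module/matrix structure.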

It's hard for me to imagine how a sensible process can see all this and still conclude a preponderance of the evidence in favor of infringement. Perhaps the process is not sensible.

But if the process is really so insensible as to advantage a patent holder in these circumstances, then PQC as a whole may be at much greater risk than we think.

We are discussing only the patents we know about, but there may be other unknown patents hiding underwater. If a patent owner who has explicitly said (in effect) "Kyber/SABER don't infringe on this" still has a noticeable chance of proving infringement, then there's no end to the kind of wild assertions of infringement that might emerge from other patents, against all kinds of NIST PQC proposals.

I don't have a suggestion about what to do with such "unknown unknowns," but the assessment of the known risks should be based on the known facts (like those above).

D. J. Bernstein

unread,
Oct 13, 2021, 11:33:01 AM10/13/21
to pqc-...@list.nist.gov
> We are discussing only the patents we know about, but there may be
> other unknown patents hiding underwater.

Examples of previous pqc-forum notes regarding exactly this topic:

If a candidate publishes its last tweak in month M then it's
inherently immune to any patent application filed after month M, and
thus to any patent application published after month M+18. There's
also serious time and effort to review the patent applications before
month M+18 and figure out what they cover---this requires reading
many obfuscated pages in patent files, understanding the technical
details, and understanding the rules for how patents are interpreted,
but on the bright side this work can start long before month M+18,
since it's not as if all the patent applications are suddenly
appearing in the last month. ...

I agree that there are many mindless new patents that could pose
dangers. If as a community we don't make serious efforts to track and
avoid these dangers, and big companies end up being sued and having to
do _another_ round of painful multi-year crypto upgrades, then the users
will justifiably blame us for being so reckless.
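
To make the month-M arithmetic concrete (hypothetical dates, not a
claim about any particular submission):

    # Illustrative date arithmetic only: a final tweak published in
    # month M is immune to applications filed later, and anything
    # filed by M must be published by M + 18.
    from datetime import date

    last_tweak = date(2019, 10, 1)   # hypothetical month M
    m = last_tweak.month + 18
    deadline = date(last_tweak.year + (m - 1) // 12, (m - 1) % 12 + 1, 1)
    print(f"All potentially dangerous applications are public by {deadline}")

With these hypothetical dates, the search space of dangerous patent
applications is closed by April 2021, and review work can start well
before then.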

It would be nice to see the discussion moving forward, building on
previous observations rather than ignoring them. It would also be nice
to see NIST releasing its patent analyses to the public and helping
organize public efforts to manage the risks; better late than never.

> As I understand it, at least in the US, proving infringement is a significantly
> higher bar than showing validity: the patent holder bears the burden of proof
> to show infringement, via a preponderance of the evidence.

"Burden of proof" is merely stating who loses when _no_ evidence is
provided: it's up to the patent holder to provide evidence for each
element of infringement. To understand how courts evaluate the evidence,
one has to start with the definition of infringement and the procedures
used in patent cases, including Markman hearings (how the words in the
patent end up being defined) and the doctrine of equivalents (allowing
something to infringe _without_ meeting the definitions of the words).
The rules of the game are _really_ important.

"Preponderance" is the default rule in civil proceedings: when there's a
dispute about the facts, whichever side has 51% of the evidence
(weighted by credibility etc.) wins. Example of previous pqc-forum
comment that made the balance much more clear for readers than the
current comment: "Courts do _not_ presume non-infringement; whichever
side has the preponderance of evidence regarding infringement wins."

It's correct that validity is an exception to the default rule:
basically, courts presume that the patent office has done its job. This
is one of many ways that the system is tilted towards patent holders,
and one of many flaws in a previous declaration on pqc-forum (still not
withdrawn) that a court is "likely" to "accept" an invalidity argument.

---Dan

daniel.apon

unread,
Oct 13, 2021, 7:42:10 PM10/13/21
to pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov
Speaking for myself,

I believe essentially everyone agrees that it would be an abject disaster for global PQC standards, and a fundamental disservice to the public interest, if NIST is forced to choose its near-term PQC standards based on a meritless patent claim, rather than basing its standardization decisions on the true technical merits on a candidate-by-candidate basis, especially given the decades of public research put in, plus the recent years of concentrated concrete analysis that thousands+ of people have put in.

If such occurs, we can all choose how to assign blame later.

Greg Maxwell

unread,
Oct 13, 2021, 8:28:40 PM10/13/21
to daniel.apon, D. J. Bernstein, pqc-...@list.nist.gov
On Wed, Oct 13, 2021 at 11:42 PM 'daniel.apon' via pqc-forum
<pqc-...@list.nist.gov> wrote:
> I believe essentially everyone agrees that it would be an abject disaster for global PQC standards, and a fundamental disservice to the public interest,
> if NIST is forced to choose its near-term PQC standards based on a meritless patent claim, rather than basing its standardization decisions on the
> true technical merits on a candidate-by-candidate basis, especially given the decades of public research put in, plus the recent years of concentrated
> concrete analysis that thousands+ of people have put in.

Unless I've misunderstood you, I couldn't disagree more strongly. All
of the survivors and secondaries out of the latest round are believed
per the best available science to be essentially good, viable,
proposals. If not they would have been dropped.

Plenty of technology goes underused or completely unused due to patent
allegations which are ultimately overblown or meritless (as was
substantially the case for ECC for a decade, see RFC6090-- but it's
been even worse for other technology). The simple reason is that if
you are sued for infringement and _win_ you still end up losing an
enormous amount of money.

Cryptography in particular has a long history of total non-deployment
in the face of patent encumbrance, even when the patent claims were
dubious. In the case of PQC the risks being defended against are more
speculative than they are for traditional cryptography. The deploying
party takes the patent liability risk, but it's the end user who
takes the long-term QC risk to their confidentiality.

The abject disaster would be NIST unnecessarily selecting a solution
which parties will decline to deploy because of a credible threat of
patent litigation, while viable alternatives exist without the concern
but which are not deployed because they're not "the standard".

We live in a world of trade-offs but one of the most important
criterion is that a scheme be deployable by industry in practice, and
that requires that the value that the scheme provides (in resistance
to speculative attack) be greater than the costs (including
computational, bandwidth, and legal liability). Legal exposure
concerns are not of much relevance for an academic paper, but they are
highly relevant for something to be deployed for actual use in
industry-- much more relevant to many than the communications/cpu
performance differences among any of the finalists. If the result
isn't deployable then those years of analysis will ultimately not
contribute to many people's privacy.

daniel.apon

unread,
Oct 13, 2021, 9:19:58 PM10/13/21
to pqc-forum, gmax...@gmail.com, D. J. Bernstein, pqc-...@list.nist.gov, daniel.apon
Hi Greg/all,
(again speaking for myself)

I certainly expect that NIST will select a solution that avoids threats of patent litigation -- or more generally, will be deployable by industry in practice immediately.
Given the importance of the PQC transition to mitigate quantum threats, ensuring early adoption of the first PQC standards is Job #1 at the current stage.

However,

"All of the survivors and secondaries out of the latest round are believed
per the best available science to be essentially good, viable,
proposals. If not they would have been dropped."
is not entirely accurate. Rather, it's not the full story.

Certainly, the world can get by with any of the schemes moved into the third round, but they are certainly not all equal. (The large body of research literature, particularly in the past few years, gives a large distinction in fundamental quality between the various schemes.)

D. J. Bernstein

unread,
Oct 13, 2021, 11:06:46 PM10/13/21
to pqc-...@list.nist.gov
NIST employee writes:
> the recent years of concentrated concrete analysis that thousands+ of
> people have put in

Evidence? Also, how big is "thousands+" compared to "thousands"?

There were hundreds of submitters. There are NISTPQC papers from
non-submitters, but there are also non-NISTPQC papers from submitters,
and there are submitters who seem to be busy working on things other
than analysis. So I'm puzzled hearing that there has been "concentrated
concrete analysis" from "thousands+ of people". This also seems hard to
reconcile with the years of NISTPQC that elapsed before some very easy
attacks were published, such as the complete break of Round2, and more
broadly with the instability of the attack picture for some submissions
(e.g., regarding lattices, see https://cr.yp.to/talks.html#2021.06.08).

If the intention here was to make a guess regarding analysis carried out
in secret by, say, the Chinese government, then this needs to be
clarified, so that people don't confuse it with public analysis.

> The large body of research literature, particularly in the past few
> years, gives a large distinction in fundamental quality between the
> various schemes.

Wow, sounds important. How about telling the public what this allegedly
"large distinction in fundamental quality" is, and full details of how a
NIST employee arrived at this characterization given the literature, so
that the public has a real chance to see and correct errors _before_
NIST uses those errors for decisions? For example, is the "thousands+"
statement above feeding into NIST's assessment of whether mathematical
structure is "well understood" (official evaluation criterion)? How
exactly is this being compared across the "various schemes"?

For comparison, the official rules say that NIST's analysis will be
"thorough" and will be performed "in a manner that is open and
transparent to the public". This seems very far from reality.

---Dan

Ruben Niederhagen

unread,
Oct 14, 2021, 3:56:00 AM10/14/21
to pqc-forum, pqc-...@list.nist.gov
Hi!

> On Oct 14, 2021, at 03:20, 'daniel.apon' via pqc-forum <pqc-...@list.nist.gov> wrote:
>
> However,
> "All of the survivors and secondaries out of the latest round are believed
> per the best available science to be essentially good, viable,
> proposals. If not they would have been dropped."
> is not entirely accurate. Rather, it's not the full story.
>
> Certainly, the world can get by with any of the schemes moved into the third round, but they are certainly not all equal. (The large body of research literature, particularly in the past few years, gives a large distinction in fundamental quality between the various schemes.)

This, in some way, supports Greg Maxwell's statement: in practice, a scheme that "the world can get by with" and that has little or no risk of patent issues is much more usable and much more likely to be adopted than a scheme with higher "quality" (by whatever definition of "quality") but with a higher risk of patent issues.

So, is there a reliable way to assess patent issues - a way that we can all trust? (Something other than concerns, opinions, anecdotes, and experiences?)

Then the patent risk could be factored into the “quality” of the scheme and we might end up with a good “quality” and a low “patent risk”.
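
As a toy illustration of what that factoring-in might look like (a hypothetical scoring model; the names and numbers are made up):

    # Toy decision model -- purely illustrative numbers. Discount each
    # scheme's technical "quality" score by the estimated probability
    # that patent litigation blocks deployment in practice.
    candidates = {
        # name: (quality score 0-100, estimated patent-risk probability)
        "scheme A": (95, 0.30),
        "scheme B": (85, 0.05),
    }
    for name, (quality, risk) in candidates.items():
        print(f"{name}: risk-adjusted quality = {quality * (1 - risk):.1f}")

With these made-up numbers the technically weaker scheme comes out ahead (roughly 66.5 vs 80.8) once the risk discount is applied, which is Greg's point in quantitative form.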

Ruben




Blumenthal, Uri - 0553 - MITLL

unread,
Oct 14, 2021, 11:29:35 AM10/14/21
to pqc-forum
At least some submitters of the Lattice-based algorithms seem to be backed by large companies that, presumably, have some interest in the outcome. At least, they seem to have the most to lose if, e.g., Kyber or Saber is selected, and the patent holder sues for infringement? Or if a less-optimal algorithm is selected, merely to avoid the blackmail?

While it is unclear whether NIST could or would file a Declaratory Judgment suit, perhaps these submitters' employers could team up and file such a suit now?

Just a thought...
--
Regards,
Uri

Vadim Lyubashevsky

unread,
Oct 15, 2021, 5:30:26 AM10/15/21
to Greg Maxwell, pqc-...@list.nist.gov
Dear all,

> The abject disaster would be NIST unnecessarily selecting a solution
> which parties will decline to deploy because of a credible threat of
> patent litigation, while viable alternatives exist without the concern
> but which are not deployed because they're not "the standard".

There is definitely tension here between those who think that the
whole thing will have been a waste of time if a scheme is chosen for
non-technical reasons, and those who think that everything will have
been a waste of time if people are scared to deploy the chosen scheme.
Clearly both sides' arguments have a lot of merit, and we should
strive toward a solution satisfactory to all parties.

My personal preference would be to simply let the community design
a better NTRU than NTRU-HRSS (Fig. 10 of
https://eprint.iacr.org/2021/1352.pdf is basically NTRU from 20 years
ago done over a different ring -- we could then discuss the CCA
transform to use) -- this should not take much time. But a second,
perhaps even easier solution, would be to simply standardize more than
one, or even all three, lattice KEMs. The people who are afraid of
patents can just use NTRU-HRSS, and those who are not (or if the
patent thing definitively sorts itself out), can use Kyber/Saber.
The reason one might want to standardize all three is that there are
differences between Kyber and Saber, and so there could be reasons we
don't know about yet (e.g. more patents?) why one would be preferable
to the other.

Best,
Vadim

>
> We live in a world of trade-offs but one of the most important
> criteria is that a scheme be deployable by industry in practice, and
> that requires that the value that the scheme provides (in resistance
> to speculative attack) be greater than the costs (including
> computational, bandwidth, and legal liability).

> Legal exposure
> concerns are not of much relevance for an academic paper, but they are
> highly relevant for something to be deployed for actual use in
> industry-- much more relevant to many than the communications/cpu
> performance differences among any of the finalists. If the result
> isn't deployable then those years of analysis will ultimately not
> contribute to many people's privacy.

Wrenna Robson

unread,
Oct 15, 2021, 7:17:05 AM10/15/21
to Vadim Lyubashevsky, Greg Maxwell, pqc-...@list.nist.gov
Out of interest, how many person-hours of effort would a single standardization take (and how much work is on the part of the submitters and how much is on the part of NIST?)

gard...@gmail.com

unread,
Oct 15, 2021, 10:37:15 AM10/15/21
to pqc-forum, vadim....@gmail.com, pqc-...@list.nist.gov, gmax...@gmail.com
On Friday, October 15, 2021 at 5:30:26 a.m. UTC-4 vadim....@gmail.com wrote:
But a second,
perhaps even easier solution, would be to simply standardize more than
one, or even all three, lattice KEMs. The people who are afraid of
patents can just use NTRU-HRSS, and those who are not (or if the
patent thing definitively sorts itself out), can use Kyber/Saber.
The reason one might want to standardize all three is that there are
differences between Kyber and Saber, and so there could be reasons we
don't know about yet (e.g. more patents?) why one would be preferable
to the other.


 Speaking for myself,

This forum should focus on the technical aspect of weeding out submissions which don't provide the required assurance, while keeping in mind that the best technical choice is not always the best choice in practice.

To that end I like the idea of partially letting the market decide, so long as we don't end up with "too many" choices in the standard.

Cheers,
Mike

D. J. Bernstein

unread,
Oct 17, 2021, 9:01:45 AM10/17/21
to pqc-...@list.nist.gov
Vadim Lyubashevsky writes:
> But a second, perhaps even easier solution, would be to simply
> standardize more than one, or even all three, lattice KEMs.

Covering both sides of the current patent issues would be much more
easily handled by simply standardizing NTRU Prime, which from the
beginning of round 1 has been balanced between Streamlined NTRU Prime
and NTRU LPRime, with extensive sharing of work across the options,
notably in the parameter choices, security analysis, software, testing
(which is where most of NIST's long-term work goes), and verification.

However, given NIST's patent-buyout failures, if we don't hear soon that
the ongoing litigation has won its appeal then the lattice field should
be narrowed to Streamlined NTRU Prime and the NTRU submission. Sure,
there are arguments that Kyber and SABER are better, but there are also
arguments that they're worse, and none of these arguments matter if we
can't get rid of the elephant in the room. Spreading effort across many
options exacerbates many other risks, and the potential value of extra
options goes way down when those options are hobbled by patent threats.

---Dan

Blumenthal, Uri - 0553 - MITLL

unread,
Oct 17, 2021, 10:11:20 AM10/17/21
to pqc-...@list.nist.gov
To save me digging time - could you please remind if NTRU Prime parameter sets include NIST Security Level 5, and if so - which ones?

Thanks

D. J. Bernstein

unread,
Oct 17, 2021, 1:54:10 PM10/17/21
to pqc-...@list.nist.gov
Blumenthal, Uri - 0553 - MITLL writes:
> To save me digging time - could you please remind if NTRU Prime
> parameter sets include NIST Security Level 5, and if so - which ones?

Here's the chart of size/speed numbers and Core-SVP levels:

https://ntruprime.cr.yp.to/speed.html

You're looking for the largest specified parameter sets, sntrup1277 and
ntrulpr1277. These are Core-SVP 2^270 and 2^271, higher than the 2^260
from firesaber and the 2^256 from kyber1024.

---Dan

P.S. You probably noticed the beginning of this thread claiming that
"the current NTRU scheme has yet to present a level-5 parameter set",
but NTRU team email to pqc-forum dated 21 Jun 2021 09:50:26 -0400 looks
to me like it's presenting two such parameter sets (Core-SVP 2^301 and
2^310) and a link to reference software for those parameter sets. Putting
together an optimized implementation isn't rocket science at this point.

P.P.S. In light of recent advances in lattice attacks, you may wish to
consider asking for even higher security levels, say Core-SVP 2^1000.

Blumenthal, Uri - 0553 - MITLL

unread,
Oct 17, 2021, 5:47:10 PM10/17/21
to D. J. Bernstein, pqc-...@list.nist.gov
On 10/17/21, 13:54, "D. J. Bernstein" <pqc-...@list.nist.gov on behalf of d...@cr.yp.to> wrote:

> Blumenthal, Uri - 0553 - MITLL writes:
> > To save me digging time - could you please remind if NTRU Prime
> > parameter sets include NIST Security Level 5, and if so - which ones?
>
> Here's the chart of size/speed numbers and Core-SVP levels:
>
> https://ntruprime.cr.yp.to/speed.html

Thank you!

> You're looking for the largest specified parameter sets, sntrup1277 and
> ntrulpr1277. These are Core-SVP 2^270 and 2^271, higher than the 2^260
> from firesaber and the 2^256 from kyber1024.

Yes, this appears to be acceptably-high level, thanks.

> P.S. You probably noticed the beginning of this thread claiming that
> "the current NTRU scheme has yet to present a level-5 parameter set",

As indeed there was no such set until last June. ;-)

> but NTRU team email to pqc-forum dated 21 Jun 2021 09:50:26 -0400 looks
> to me like it's presenting two such parameter sets (Core-SVP 2^301 and
> 2^310) and a link to reference software for those parameter sets. Putting
> together an optimized implementation isn't rocket science at this point.

And their "important" sizes (i.e., what must go over the wire) are 1842 bytes of ciphertext, and same for public key. Larger than Kyber1024 by 274 bytes, but smaller than ntrulpr1277 by 133 bytes (ciphertext), and mere 5 bytes for pubkey...

> P.P.S. In light of recent advances in lattice attacks, you may wish to
> consider asking for even higher security levels, say Core-SVP 2^1000.

Do you *really* think it's justified? The attacks seem quite limited in both applicability and scope...?

Thanks

daniel.apon

unread,
Oct 18, 2021, 4:29:55 PM10/18/21
to pqc-forum, u...@ll.mit.edu, D. J. Bernstein

The most advanced attacks against lattice-based KEMs announced in a talk or published in a paper to date, including e.g. https://eprint.iacr.org/2021/1384.pdf, do not appear close to (even in the ballpark of) threatening any Round 3 candidate's security.

--Daniel

Vadim Lyubashevsky

unread,
Oct 19, 2021, 9:27:50 AM10/19/21
to pqc-forum
On Sun, Oct 17, 2021 at 3:01 PM D. J. Bernstein <d...@cr.yp.to> wrote:
>
> Vadim Lyubashevsky writes:
> > But a second, perhaps even easier solution, would be to simply
> > standardize more than one, or even all three, lattice KEMs.
>
> Covering both sides of the current patent issues would be much more
> easily handled by simply standardizing NTRU Prime,

If we're not going to follow the process, then we might as well create
an "optimal" scheme. The problem with only NTRU-HRSS or NTRUPrime
being standardized is that instantiating the original NTRU with
NTT-friendly rings produces a *much* better scheme (15X better, as
mentioned before). And there will be scenarios where people will want
to use it, and we'll be back to the standardization process. Not
following the process only to end up with a non-optimal scheme
does not make too much sense.

If we are going to follow the process, then let's standardize all 3
finalists and let it be sorted out later. At the very least, we will
have schemes which are (nearly) optimal as standards, and whenever
people are ready to use them, they can. And as an aside, there has
been virtually no public work on side-channel resistance for NTRU
schemes, so IP can pop up there as well ... the more things that are
standardized, the less valuable the IP is and the more likely we are
to be able to dodge it.

The one thing that we really want to avoid at this point is being back
to the standardization process because of the existence of much better
schemes or because some schemes are not usable. It seems like
over-standardizing a bit might be the only simple solution to this.

Best,
Vadim

P.S. With regards to the NTRU Level 5 parameter set, I did not say
that it wasn't proposed (we even mentioned it in Table 3 of
https://eprint.iacr.org/2021/1352.pdf), but that it wasn't
implemented. Since varying security levels in NTRU / NTRUPrime
requires significantly more effort than in Kyber/Saber, this is not a
completely trivial matter and it would need some time to be looked at
by the community.




> which from the
> beginning of round 1 has been balanced between Streamlined NTRU Prime
> and NTRU LPRime, with extensive sharing of work across the options,
> notably in the parameter choices, security analysis, software, testing
> (which is where most of NIST's long-term work goes), and verification.
>
> However, given NIST's patent-buyout failures, if we don't hear soon that
> the ongoing litigation has won its appeal then the lattice field should
> be narrowed to Streamlined NTRU Prime and the NTRU submission. Sure,
> there are arguments that Kyber and SABER are better, but there are also
> arguments that they're worse, and none of these arguments matter if we
> can't get rid of the elephant in the room. Spreading effort across many
> options exacerbates many other risks, and the potential value of extra
> options goes way down when those options are hobbled by patent threats.
>
> ---Dan
>

D. J. Bernstein

unread,
Oct 19, 2021, 2:30:37 PM10/19/21
to pqc-...@list.nist.gov
> Since varying security levels in NTRU / NTRUPrime
> requires significantly more effort than in Kyber/Saber, this is not a
> completely trivial matter and it would need some time to be looked at
> by the community.

Nope. The work is already automated, and everyone can already see
size-security graphs for a ton of NTRU and NTRU Prime parameters here:

* pages 28, 37, and 38 of NTRU round-3 submission document
* pages 104--111 of https://ntruprime.cr.yp.to/nist/ntruprime-20201007.pdf

At least for NTRU Prime, the official reference code and AVX2-optimized
code for all sizes are automatically generated from a unified code base.

As for "more effort than in Kyber/Saber", the reader expects this to
mean that Kyber and SABER can easily vary security levels. Let's look at
the facts:

* After Kyber's security argument for round-2 Kyber-512 was shredded
in pqc-forum email dated 30 May 2020 02:15:31 +0200, the Kyber
submission went through fascinating contortions to claim 7 bits
more security by combining a new round-3 Kyber-512 cryptosystem
with a new security analysis.

* Kyber and SABER force dimensions to be multiples of 256, leaping
from 512 to 768 to 1024 with no possibility of supporting
intermediate sizes---and with increasingly severe performance
problems after 1024.

Doesn't look like easy-to-vary security levels. Meanwhile the selection
of submitted NTRU Prime parameter sets already covers a spectrum of
intermediate dimensions that Kyber and SABER structurally can't match.

---Dan

Vadim Lyubashevsky

unread,
Oct 19, 2021, 6:15:26 PM10/19/21
to pqc-forum
On Tue, Oct 19, 2021 at 8:30 PM D. J. Bernstein <d...@cr.yp.to> wrote:
>
> * After Kyber's security argument for round-2 Kyber-512 was shredded
> in pqc-forum email dated 30 May 2020 02:15:31 +0200, the Kyber
> submission went through fascinating contortions to claim 7 bits
> more security by combining a new round-3 Kyber-512 cryptosystem
> with a new security analysis.

A very colorful description for Kyber-512 simply increasing the noise
rate of one of the secrets and gaining 6 bits of the usual core-SVP
hardness under the assumption that rounding adds noise. The only
fascinating thing was how the claim of an extra 6 bits turned into a
40 message thread (the number 40 actually hides the absolutely
impressive number of paragraphs in each of your messages) in which you
accused Kyber of being insecure, infringing on patents, and being part
of an NSA conspiracy
(https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/NSe0wAzKJtA).
Actually, I am not sure why no one pointed out that the latter two are
contradictory -- i.e. the NSA, as part of its plans to push scheme X,
would surely take care of the patent situation by making the patent
owners release their IP ... and then perhaps even try to block
schemes Y and Z by accusing them of infringing on patents. Wait ...
so isn't ... X=NTRU? I am obviously joking, but things of this
caliber were part of that fascinating thread (and unfortunately of
many other threads as well).

>
> * Kyber and SABER force dimensions to be multiples of 256, leaping
> from 512 to 768 to 1024 with no possibility of supporting
> intermediate sizes---and with increasingly severe performance
> problems after 1024.

We believe that adding about 64 bits of hardness at every
discretization is a worthwhile trade-off for being able to very easily
vary security according to this discretization, and to also have
security based on a problem that is closer to LWE. But of course, one
is entitled to a different opinion.

But "increasingly *severe* performance problems after 1024" is not
correct. Doubling the dimension of Kyber increases the running time by
less than 4X. So even for dimension 4096, where today we believe we
have > 1000 bits of security, Kyber would still be faster than
NTRU(Prime). If we need even more dimensions and the quadratic
asymptotics induced by the module structure really start to kick in,
then there are probably bigger issues to worry about.
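
As a rough illustration of that scaling (a toy operation count with
made-up unit costs, not a benchmark): with the ring dimension fixed
at n = 256 and module rank k, the matrix-vector arithmetic costs
about k^2 NTT-domain ring products plus O(k) NTTs, so doubling
d = k*n multiplies the work by a factor between 2 and 4.

    import math

    n = 256  # ring dimension stays fixed; total dimension is d = k * n

    def cost(k):
        # Arbitrary units; only the ratios matter.
        ntt = n * math.log2(n)   # one size-n NTT
        pointwise = n            # one NTT-domain ring product
        # ~k^2 ring products for the matrix, ~3k NTTs for the transforms.
        return k * k * pointwise + 3 * k * ntt

    for k in (2, 4, 8, 16):
        ratio = cost(k) / cost(k // 2)
        print(f"d = {k * n:5d}: {ratio:.2f}x the cost of d = {k * n // 2}")

With these constants the ratio creeps from about 2.1x toward the
quadratic limit of 4x as k grows, but stays well below 4x throughout
the range of interest.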

Vadim.

D. J. Bernstein

unread,
Oct 20, 2021, 10:20:03 AM10/20/21
to pqc-...@list.nist.gov
Vadim, to clarify the record, please retract the false claim that "Since
varying security levels in NTRU / NTRUPrime requires significantly more
effort than in Kyber/Saber, this is not a completely trivial matter and
it would need some time to be looked at by the community". Also, please
try to refrain from burying a technical discussion under ad-hominem
attacks, no matter how fervently you believe what you're saying. Thanks
in advance.

---Dan

D. J. Bernstein

unread,
Oct 24, 2021, 3:17:01 AM10/24/21
to pqc-...@list.nist.gov
> > P.P.S. In light of recent advances in lattice attacks, you may wish to
> > consider asking for even higher security levels, say Core-SVP 2^1000.
> Do you *really* think it's justified?

If you can afford it, certainly! For lattice-based cryptography, the
attack picture is rapidly changing, and the risk-assessment mechanisms
are deeply flawed. See, e.g., the amazing series of broken "barrier"
claims cited in Appendix A.2 of https://cr.yp.to/papers.html#spherical.
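
For scale, using the usual Core-SVP convention that BKZ blocksize beta
costs about 2^(0.292*beta), the required blocksize (and with it the
lattice dimension and ciphertext size) grows linearly with the target:

    # Usual Core-SVP convention: sieving cost ~ 2^(0.292 * beta) for
    # BKZ blocksize beta. Illustrative arithmetic only.
    for target_bits in (128, 256, 270, 1000):
        beta = target_bits / 0.292
        print(f"Core-SVP 2^{target_bits}: blocksize beta ~ {beta:.0f}")

A 2^1000 target needs beta around 3400, hence lattice dimensions
several times larger than today's level-5 parameter sets -- which is
where size constraints of the kind discussed next come in.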

If your application has size constraints that would rule out, e.g., 8KB
ciphertexts, or other potentially relevant performance constraints, then
it'd be great to have pointers to publicly verifiable information about
the requirements so that they can be taken into account. For TLS we've
seen

https://www.imperialviolet.org/2018/04/11/pqconftls.html

reporting software-interoperability problems with keys approaching 4KB,
although it's not clear that this is a long-term issue.

---Dan