Patent-buyout updates


D. J. Bernstein

May 17, 2021, 6:09:44 AM
to pqc-...@list.nist.gov
In email dated 11 Dec 2020 16:08:14 +0100 I wrote the following:

> The word "critical" appears exactly once in the
> NISTPQC call for proposals:
>
> NIST believes it is critical that this process leads to cryptographic
> standards that can be freely implemented in security technologies and
> products.
>
> This is in Section 2.D, "Intellectual Property Statements / Agreements /
> Disclosures". NIST appears to have tried to collect statements from
> submitters regarding their own patents on their own submissions; this is
> helpful, and seems authoritative, but it doesn't make clear that a
> submission "can be freely implemented". Sometimes submissions are
> covered by patents from other people.
>
> Patents are also included in the call for proposals under evaluation
> criterion 4.C.3, "Adoption", which broadly considers all factors "that
> might hinder or promote widespread adoption of an algorithm or
> implementation", and names "intellectual property" as an example. Again
> the statements from submitters regarding their own patents on their own
> submissions are not sufficient for evaluating this.
>
> NISTPQC has already established a track record of mistakes even within
> the technical areas of expertise of the submitters and evaluators. It's
> not reasonable to imagine that evaluations of patent threats will have a
> zero error rate. It's important to have procedures in place to recognize
> and correct errors in evaluations of patent threats, starting with a
> rule of detailed public analyses.
>
> As an analogy, NISTPQC efficiency claims are subjected to detailed
> public reviews, even when it's clear that the specific claims matter for
> only a narrow (and shrinking) corner of the user base. When two patents
> have been identified that can each singlehandedly destroy >99% of the
> potential usage of Kyber et al. between now and the early 2030s, we
> should be putting a correspondingly careful, publicly reviewed effort
> into establishing the magnitude and boundaries of the threat.
>
> NIST IR 8309 says that if "intellectual property issues threaten the
> future of KYBER and SABER" then "NTRU would be seen as a more appealing
> finalist"---but hides its _reasons_ for saying this. Readers are misled
> into thinking this is a purely hypothetical issue. Readers who already
> know better aren't being given the opportunity to see and comment on the
> NIST handling of patent issues. Given the (obviously intentional) lack
> of transparency regarding such an important issue, I've filed a FOIA
> request for metadata regarding NIST's secret patent discussions, after
> careful consideration of the potential consequences of such a request.
>
> Patent problems, like efficiency problems, are sometimes solved. There
> are occasional rumors of efforts to solve NISTPQC patent problems. This
> is _not_ an argument against public evaluations of the problems that
> currently exist. We should publicly evaluate the dangers to users _and_
> publicly evaluate the chance of the dangers going away. If they go away,
> great; if they don't, we know how bad they are; either way, we're
> putting due diligence into understanding the issues.

The FOIA request mentioned above was filed 2020.12.07, and asked for the
following records:

> My understanding is that NIST has identified and/or learned about
> various patents that apply or potentially apply to various round-3
> submissions to the NIST Post-Quantum Cryptography Standardization
> Project (NISTPQC). My understanding is that NIST then engaged in
> communications with patent holders regarding some of these patents.
>
> I am hereby requesting (1) the list of patent numbers for all patents
> mentioned above, (2) for each patent number, a list that for each
> round-3 NISTPQC submission summarizes NIST's categorization of the
> applicability and/or potential applicability of the patent to the
> submission, and (3) a list showing the date, time, sender, receiver(s),
> format (e.g., "email" or "telephone"), and connected patent number(s)
> for each of the communications mentioned above.
>
> I also request the same information regarding patent applications; i.e.,
> "patents" here should be understood to include patent applications.
> Furthermore, to be clear, this request is not limited to U.S. patents.
>
> I request the above information in electronic form.
>
> Regarding #1, I am already aware of some "IP statements" listed in
>
> https://csrc.nist.gov/Projects/post-quantum-cryptography/round-1-submissions
> https://csrc.nist.gov/Projects/post-quantum-cryptography/round-2-submissions
> https://csrc.nist.gov/Projects/post-quantum-cryptography/round-3-submissions
>
> but I am concerned that these are incomplete and/or outdated, and as far
> as I know they are limited to patent claims by submitters regarding
> their own submissions, so these don't answer my request.

NIST finally answered on 2021.05.04 (very far past the FOIA deadlines),
refusing to provide 3 documents totaling 31 pages (perhaps "secret law"
in violation of FOIA---to be determined) but providing _some_ dribbles
of interesting information.

To review some of the relevant background:

* https://patents.google.com/?q=%22post-quantum%22&oq=%22post-quantum%22
shows more than 1000 results today---plus there's no requirement
for a post-quantum patent to use the phrase "post-quantum". Some of
the patents are outside the NISTPQC scope but many of them are
clearly within the NISTPQC scope.

* A patent filed 2018 or later (or 2019 or later in the case of U.S.
patents by the submitters) can't apply to round-1 submissions
published in December 2017, but _can_ apply to subsequent tweaks,
implementation speedups, side-channel protections, etc. When NIST
factors tweaks and speedups and so on into its decisions, does it
first investigate whether those things are patented?

Furthermore, some people and companies started earlier. ISARA has
been filing one post-quantum patent after another since 2016.

* Patent holders typically try to maximize their income by waiting
for deployment by companies with big pockets and then asking those
companies for money. For example, while Google was running its
CECPQ1 experiment using NewHope, Ding reportedly contacted Google
for payment for his patent 9246675 on noisy DH + compressed
reconciliation. (Google insists that it concluded the experiment
for other reasons. The labeling as an "experiment" might have saved
Google from any liability here, but isn't helpful for those of us
who are trying to get users protected as soon as possible.)

Decisions to avoid deploying patented work in the first place
directly interfere with this income, so patent holders typically
try to prevent such decisions by staying quiet about their patents
until deployment happens. In theory, publication of patents serves
as a warning to the public; in reality, this mechanism has very low
reliability given the number of patents, the difficulty of reading
patents, and the limited resources available for review. Ding
didn't stay quiet about his patent but Gaborit and Aguilar Melchor
stayed quiet for _eight years_ about their broader patent 9094189
on noisy DH + reconciliation. (As a side note, a patent on noisy DH
+ reconciliation doesn't stop a subsequent patent on noisy DH +
compressed reconciliation.)

* NISTPQC took a useful step of requiring submitters to reveal their
patents on their own submissions. Submitters violating this rule
would have been putting their own patents at risk via what's called
"estoppel" in U.S. courts. Extra work to review the declared
patents then led to public knowledge of the danger from 9094189:

https://twitter.com/hashbreaker/status/995111649982939136

However, this process obviously doesn't eliminate the broader risk
of patent problems.

* Once a patent has been identified, one can try to get rid of the
patent through litigation. However, this is expensive; the
procedures are heavily tilted towards patent holders; and
occasionally the patent holders really were first and would win
even without the tilt in the procedures.

British law firm Keltie, on behalf of a secret client, brought a
round of litigation against the European version of 9094189
starting in January 2017, and totally failed to kill the patent.
_Maybe_ Keltie will do better on appeal, and the document list

https://register.epo.org/application?number=EP11712927&lng=en&tab=doclist

shows that a hearing on the appeal is now scheduled for November
2021. (Isn't NIST talking about making standardization decisions
before that?)

* One can also try to get rid of a patent through buyouts. For years
there have been rumors of NIST trying to buy out several of the
most worrisome patents.

Meanwhile patent holders see estimates of their market value going
up and up---consider, e.g., the June 2020 statement "Inside
Quantum’s Post-Quantum Cryptography Report Pegs PQC Market at $9.5
Billion in 2029"---and it's easy to imagine that NIST simply can't
afford to buy out the most important post-quantum patents. Sure,
hope springs eternal, but optimism is not a substitute for
competent risk management.

* One can also try to avoid anything threatened by patents. However,
this requires understanding the details of what one needs to avoid,
and this can be tricky. As I wrote in December: "It's not
reasonable to imagine that evaluations of patent threats will have
a zero error rate. It's important to have procedures in place to
recognize and correct errors in evaluations of patent threats,
starting with a rule of detailed public analyses."

Subsequent public discussions of patents have been full of errors:
e.g., pushing two definitions of "reconciliation" that turn out to
contradict each other; claiming that the patent by Gaborit and
Aguilar Melchor was "likely" to be "invalidated" while ignoring
Keltie's failure to invalidate that patent; and making exactly the
mistake that the Supreme Court had specifically rejected in _Festo
v. Shoketsu_, 535 U.S. 722 (2002). The community is obviously very
far from consensus on the scope of the threat.

* NIST posted the IP statements three years ago. If there had been
coordinated public effort starting at that point to analyze the
scope of the known threat then I think the most important parts of
the analysis would have been settled by now.

However---even though the call for proposals had described free
usability as "critical"---NIST started trying, with considerable
success, to delay and deter public analysis of the patent threats.
I don't understand why.

Now let's look at the FOIA results. One interesting aspect of NIST's
response is that it lists only _one_ patent family, the 9094189 family.
NIST admits one thread of negotiations regarding that family, and that's
it. This is quite different from the narrative that NIST is on top of
things and is on the verge of buying out all relevant patents. For
comparison, https://ntruprime.cr.yp.to/faq.html says

There are known patent threats against the "Product
NTRU"/"Ring-LWE"/"LPR" lattice proposals: Kyber, SABER, and NTRU
LPRime (ntrulpr). These proposals use a "noisy DH + reconciliation"
structure that appears to be covered by U.S. patent 9094189 expiring
2032, and a 2x ciphertext-compression mechanism that appears to be
covered by U.S. patent 9246675 expiring 2033. There are also
international patents, sometimes with different wording.

Both of these patents "apply or potentially apply to various round-3
submissions", so they were within the scope of the FOIA request. Why
wasn't 9246675 included in NIST's "list of patent numbers"? Where's the
metadata regarding NIST's communication with Ding regarding this patent?
It's hard to believe that there hasn't been any communication. I also
haven't seen an announcement of a successful buyout of the 9246675
patent family: the family is still there, and if we believe NIST's FOIA
response then NIST isn't doing anything about this.

And what about ISARA? ISARA stated in email dated 31 May 2019 16:39:10
+0000 that it "had the opportunity to talk to NIST" and would be
"working together with NIST to provide a royalty-free grant to all
schemes in the NIST competition". I expressed concern:

> This sounds great if it actually happens. However, I'm concerned about
> the following scenario:
>
> * The _hope_ of free use of the patents leads the patents to be given
> lower weight in selections than they would normally be given.
>
> * Negotiations between NIST and ISARA drag on, and eventually it
> turns out that NIST can't afford ISARA's buyout price.
>
> * The selections thus end up more tilted towards ISARA's patents than
> they otherwise would have been.
>
> * Users ask, quite reasonably, why patents weren't assigned a higher
> weight in the decision-making process.
>
> Is there a more specific timeframe for "will be working together"?

ISARA responded that it wasn't asking NIST for any money and that
"discussions started Friday". This removes any possible ambiguity of the
word "opportunity"---ISARA and NIST definitely communicated regarding
ISARA's patents. But I haven't seen any subsequent announcement of a
royalty-free grant.

So why didn't ISARA's patents appear in NIST's FOIA response? _Maybe_
ISARA has disclaimed all applicability of its patents to round-3
submissions---but then shouldn't NIST have been proudly announcing that
its negotiations with ISARA were no longer necessary?

As for 9094189, there's another interesting aspect of NIST's response,
namely the statement "Applicability of patents is claimed to be for PQC
Submissions of: CRYSTALS-KYBER SABER". Compare this to

https://twitter.com/hashbreaker/status/1279670996706836483

(from round 2) saying the patent "covers subsequent LPR cryptosystem and
derivatives such as Kyber, LAC, NewHope, NTRULPR, Round5, Saber,
ThreeBears"; and compare to the FAQ (from round 3) quoted above.
Discussion of this patent on pqc-forum has been similarly broad.

It's unclear how NIST ended up with a narrower description of the
"claimed" patent applicability. One possibility is that this comes from
the patent holder---but the patent holder's incentives here are out of
whack with the public incentives, and without seeing the exact wording
of the discussions we have no reason to think that NIST being misled
here would cause any estoppel issues.

It's also unclear what exactly NIST is trying to buy here. Here's a
future scenario to consider:

* NIST succeeds in allocating enough money to obtain a license
specifically for its favorite, Kyber.

* Further news regarding advances in cyclotomic attacks convinces
NIST that standardization of Kyber is dangerous. (NIST has already
recognized this as a possibility.)

* NIST asks for an emergency extension of the license to cover
alternatives. The patent holder demands much more money.

NIST has already been portraying the NTRU submission as an inferior
choice, something that it might be forced into as a result of patents
but doesn't really want; a limited license might reduce this tension but
wouldn't resolve it. I should note here that the case for Product NTRU
(Kyber, SABER, NTRU LPRime) over Quotient NTRU (the NTRU submission and
Streamlined NTRU Prime) consists almost entirely of misinformation, in
part pushed by NIST, presumably increasing the buyout price. There _are_
some facts that favor Product NTRU, but there are also some facts that
favor Quotient NTRU, and it's very far from clear that Product NTRU
would be the best choice in a patent-free world.

(It's also disturbing to realize that a successful buyout would give
NIST an extra incentive, having nothing to do with the public interest,
to select something covered by the buyout. A report saying "we spent
this money to enable free use of this system in case we decided to
standardize it" sounds good but a report saying "we spent this money to
enable free use of this standard" sounds even better. Yes, people often
make decisions by looking ahead to what will sound best in reports.)

Another interesting aspect of NIST's response is a 2021.01.19 statement
"We're still mostly waiting for them to give us a number"---presumably
"a number" meaning the amount of money that the 9094189 patent holders
would accept in order to sell a limited license, or the whole patent, or
whatever exactly it is that NIST is asking for. It's not clear from
NIST's response how many years negotiations have been continuing.

---Dan

Moody, Dustin (Fed)

May 19, 2021, 9:27:18 AM
to D. J. Bernstein, pqc-forum

IPR issues have been an explicit evaluation criterion since the beginning of the PQC standardization process. Our efforts in this area are ongoing; we are aware of the issues. It is certainly a complex situation.


We would appreciate any feedback from the community on this issue, including the size of the role that patents/IPR issues should play in our final selection process.


Thanks,

Dustin Moody

NIST



[Also -- a reminder about the signed IP statements.  If it hasn't already been done, any new candidate team members for the 3rd round need to send us signed IP statements (not digital scans).  The same is true of any submission team whose IP situation has changed since they gave us their original IP statements.  See section 2.D of the Call for Proposals for detailed instructions.    Scans of what we have received can be found at the Round 3 Submissions page.]





Blumenthal, Uri - 0553 - MITLL

May 19, 2021, 10:09:32 AM
to pqc-forum

> IPR issues have been an explicit evaluation criterion since the beginning of the PQC standardization process. Our efforts in this area are ongoing; we are aware of the issues. It is certainly a complex situation.
>
> We would appreciate any feedback from the community on this issue, including the size of the role that patents/IPR issues should play in our final selection process.

IMHO, patent/IPR issues should play a critical role: being a show-stopper/deal-breaker.

 

Thanks!

--

Regards,

Uri

 

There are two ways to design a system. One is to make it so simple there are obviously no deficiencies.

The other is to make it so complex there are no obvious deficiencies.

                                        - C. A. R. Hoare

Watson Ladd

May 19, 2021, 10:33:14 AM
to Moody, Dustin (Fed), D. J. Bernstein, pqc-forum
On Wed, May 19, 2021 at 6:27 AM 'Moody, Dustin (Fed)' via pqc-forum
<pqc-...@list.nist.gov> wrote:
>
> IPR issues have been an explicit evaluation criterion since the beginning of the PQC standardization process. Our efforts in this area are ongoing; we are aware of the issues. It is certainly a complex situation.
>
>
> We would appreciate any feedback from the community on this issue, including the size of the role that patents/IPR issues should play in our final selection process.

Speaking only for myself: we lived through this with RSA and ECC, and in
a smaller way with PAKE. Having patents that were assertable created
significant legal risk, perceived or real, that dramatically slowed
adoption. Red Hat in particular moved very slowly because of these
fears. Even with the NSA-Certicom deal, ECC remained mostly
unimplemented until 2005, and was not widely adopted until a decade
later, in part because of a need for more efficiency, in part because of
patent fears slowing deployment.

Given that the original NTRU paper is nearly 30 years old and the
McEliece paper older still, a patent minefield isn't inevitable, and
it's very important to ensure we adopt post-quantum crypto rapidly and
everywhere.

Sincerely,
Watson Ladd
>
>
> Thanks,
>
> Dustin Moody
>
> NIST
>
>
>
> [Also -- a reminder about the signed IP statements. If it hasn't already been done, any new candidate team members for the 3rd round need to send us signed IP statements (not digital scans). The same is true of any submission team whose IP situation has changed since they gave us their original IP statements. See section 2.D of the Call for Proposals for detailed instructions. Scans of what we have received can be found at the Round 3 Submissions page.]


--
Astra mortemque praestare gradatim

Blumenthal, Uri - 0553 - MITLL

May 19, 2021, 10:53:47 AM
to Watson Ladd, pqc-forum
> > We would appreciate any feedback from the community on this issue, including the size of the role that patents/IPR issues should play in our final selection process.
>
> Speaking only for myself: we lived this with RSA and ECC, and in a
> less big way with PAKE.

Yes, and neither RSA nor ECC was adopted in Internet standards or (widely?) used until the related patents expired, which delayed RSA acceptance by about 20 years.

> Having patents that were assertable created significant legal risk perceived or
> real that dramatically slowed adoption.

Exactly!

Which means (to me) that if we want the new standards adopted (or at least adoptable) in the fairly near future, they had better have clear royalty-free licenses.

While I value algorithm efficiency as much as the next guy, IPR-unencumbered wins every time.

And, of course, I'm speaking only for myself.

Markku-Juhani O. Saarinen

May 19, 2021, 1:17:54 PM
to pqc-forum, u...@ll.mit.edu, watso...@gmail.com

On Wed, May 19, 2021 at 6:27 AM 'Moody, Dustin (Fed)' via pqc-forum
<pqc-...@list.nist.gov> wrote:
>
> IPR issues have been an explicit evaluation criterion since the beginning of the PQC standardization process. Our efforts in this area are ongoing; we are aware of the issues. It is certainly a complex situation.
(..)

> We would appreciate any feedback from the community on this issue, including the size of the role that patents/IPR issues should play in our final selection process.


Dustin et al,

In addition to algorithms themselves, the problems can also arise from obvious *use cases* being patented or tainted by patent applications. Consider US20190319796A1:

"Low latency post-quantum signature verification for fast secure-boot."
S. Ghosh et al, Intel. https://patents.google.com/patent/US20190319796A1/

This application seems to have been filed while the Hash-Based Signature (HBS) standardization process was already ongoing. It mainly talks about using XMSS in one of the most obvious use cases of HBS. Regardless of its merits, I am now hesitant to recommend XMSS as a solution to customers wanting quantum-resilient firmware security, despite it being both a NIST and an IETF standard (well, at least to some degree: an informational RFC, etc.).

Yes, I am aware of the prior art, but still -- it's there, I have no desire to be targeted by Intel lawyers, and there are other options. SP 800-208 describes not only XMSS(^MT) but also LMS and HSS, and all of these were approved last October. This may be one of the main reasons why LMS/HSS seems like the preferred choice (for this particular use case) in the industry. As an example, RFC 9019 ("A Firmware Update Architecture for Internet of Things") only mentions LMS/HSS.

NIST has other standards and technical reports discussing firmware updates (e.g. SP 800-193), but those do not include very specific algorithm guidance. If 800-193, FIPS 140-3 IG, or some other place said "you can use XMSS for digitally signing firmware updates!" I'd certainly be relieved.

- Since none of the US20190319796A1 inventors are XMSS designers, I assume that the patent application didn't need to be disclosed during the standardization process (SP 800-208 itself has a section about patent disclosures).

- On the other hand, the designers of LMS/HSS have prior art specifically in this use case, and I have no reason to believe that they have secret IP related to LMS/HSS that they didn't disclose to either NIST or IETF (RFC 8554).

I think this serves as an example of why it may be preferable to standardize multiple options "of almost the same thing" if similar security assurance is there for all of those options. Convergence to one of those options may happen for reasons unrelated to security, but at least the options are there.

Personal opinions only, not a lawyer, etc.

Cheers,
- markku

Dr. Markku-Juhani O. Saarinen <mj...@pqshield.com> PQShield, Oxford UK.

D. J. Bernstein

May 19, 2021, 2:58:17 PM
to pqc-...@list.nist.gov
Markku-Juhani O. Saarinen writes:
> Yes, I am aware of the prior art, but still -- it's there, I have no
> desire of being targeted by Intel lawyers, and there are other options.

Care is required in analyzing what the other options will be, assuming
the application turns into a patent. Procedurally, (1) claims can be
modified before issuance, and (2) patents are extended after issuance by
the doctrine of equivalents. For this particular application, the claims
are currently just for XMSS combined with "a low latency SHA3 hardware
engine", but the application mentions LMS too.

> I think this serves as an example of why it may be preferable to
> standardize multiple options "of almost the same thing" if similar
> security assurance is there for all of those options.

Wouldn't it be better to figure out what's covered by patents and then
focus efforts on what isn't? More work, sure, but then we can actually
go ahead with deployment!

If a candidate publishes its last tweak in month M then it's inherently
immune to any patent application filed after month M, and thus to any
patent application published after month M+18. It also takes serious
time and effort to review the patent applications published before month
M+18 and figure out what they cover---this requires reading many
obfuscated pages in patent files, understanding the technical details,
and understanding the rules for how patents are interpreted, but on the
bright side this work can start long before month M+18, since it's not
as if all the patent applications are suddenly appearing in the last
month.
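
To make the timeline concrete, here is a minimal sketch (Python; the
last-tweak date is hypothetical, and the 18 months is the standard
publication delay mentioned above) of when the review window closes:

    from datetime import date

    def add_months(d, months):
        # Step a date forward by whole months (day clamped for simplicity).
        y, m = divmod(d.month - 1 + months, 12)
        return date(d.year + y, m + 1, min(d.day, 28))

    # Hypothetical example: last tweak published in month M = May 2021.
    last_tweak = date(2021, 5, 1)

    # Applications filed after M can't read on the frozen design, and
    # every application filed by M is published within 18 months, so the
    # set of potentially relevant applications is fully public by M+18.
    all_public_by = add_months(last_tweak, 18)
    print(all_public_by)  # 2022-11-01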

Sure, someone can still try to patent things like "TLS plus this KEM"
or "this KEM plus a hardware accelerator for these components" or "TLS
plus this KEM plus a hardware accelerator for these components", trying
to fool the patent office into believing that these aren't already known
to be interchangeable parts. Publications clearly spelling out the
obvious interchangeability of things can go a long way in stopping such
patents, but have to fight against publications that claim novelty for
particular combinations---academic pressures towards hyping trivial
results tend to have the side effect of making trivial patents easier to
get. The best defense is to publish full post-quantum stacks as soon as
possible, covering as many variants as possible, and relax only if no
relevant landmines have appeared within 18 months.

---Dan

Panos Kampanakis (pkampana)

May 19, 2021, 4:32:31 PM
to Moody, Dustin (Fed), pqc-...@list.nist.gov

+1 on prioritizing IPR-free algorithms. We too have been trying to stay away from algorithms with IPR claims as much as we can. Algorithms integrated into standards should preferably be open. NIST has followed that rationale and has had great success in the past. On the other hand, as already mentioned by others, certain companies' IPR claims have hindered very efficient primitives from being deployed and adopted for years.

 

And I would suggest that submitters consider opening up their potential IPR. Imo, the bragging rights of having a standardized algorithm used for years are greater than the benefit of having an algorithm which is not used or standardized but could theoretically generate lots of revenue 😉

 

Rgs,

Panos

Vadim Lyubashevsky

May 20, 2021, 7:22:48 AM
to pqc-...@list.nist.gov, Moody, Dustin (Fed)
Dear all,

Of course we want IPR-free standards, and luckily the current situation is different from the examples from the past that some other posters brought up. In the case of RSA, the very foundation of the scheme was patented by the inventors. For lattice crypto (and probably isogenies too), however, I can confidently say that the currently-active patents made zero scientific contributions to any of the finalist/alternate schemes. Because the foundations of practical lattice crypto are not patented, the active patents are forced to be narrow and so one cannot simply make blanket statements like, for example, that because Google's NewHope experiment infringed on a reconciliation patent, other LWE-like schemes are in severe danger.

While Dan likes to tout that a CNRS patent has withstood one round of a lawsuit that tried to kill it, he leaves out the important fact that the patent does not apply to Kyber/Saber/Frodo because those schemes are *non-commutative* (whereas NewHope and NTRU-LPRime are commutative). We had a long technical discussion about this on this board, and this fact has even been affirmed in court by the CNRS lawyers during that same trial (see 3.21 on page 7 of https://register.epo.org/application?documentId=E2T6283H7805DSU&number=EP11712927&lng=en&npl=false). From reading the full proceedings, it seems that they had to do this in order to rescue their main claim, which is simply incorrect without assuming commutativity. Similarly, Ding's patent covers improved reconciliation, but does not cover improved public key encryption, which is what all the finalists/alternates use (the NewHope version in the original Google experiment, on the other hand, did use reconciliation). This is evidenced by the fact that in that very same patent he uses standard less-efficient public key encryption for another application. I of course agree that anything can happen in a court where non-experts get to decide on technical topics, but my point is that these patents don't seem any more scientifically relevant than the dozens of "running this KEM on a faster computer will make it faster" type patents that are out there and will continue to be pumped out and could put every scheme in some sort of danger.

One way to be somewhat safer is to be cryptographically agile -- that is, make sure that it's easy to swap between different schemes. Then if NIST follows Markku's suggestion and standardizes a few similar things, there will be less incentive for patent lawsuits because one can switch to another scheme rather than paying. There are definitely pros and cons to this approach, and it should be debated. I would also like to see NIST and the community actively discourage IP trolling by emphatically, and publicly, refusing to pay anything unless it's just some small token amount (I am not categorically against patents -- just the parasitic ones which don't actually contribute anything to science ... so RSA would have been perfectly acceptable to pay for). Instead, we should go the Cloudflare route (https://techcrunch.com/2021/04/26/cloudflare-rallies-the-troops-to-fight-off-another-so-called-patent-troll/). One can't exactly replicate Cloudflare's move since not all of these institutions are purely patent trolls, but surely there could be creative ways of going after what they value.

For example, CNRS, which is the largest publicly funded French research organization, is presumably trying to extract money from their patent (I am basing this on Dan's comment from his previous email where he said that NIST is waiting for a number, so I apologize if I am wrong and hope that in such case someone with more information will respond and clear this all up), while at the same time allowing this same patent to be used free of charge for *their* schemes BIKE and HQC (see pages 15,16, 26 of https://csrc.nist.gov/CSRC/media/Projects/post-quantum-cryptography/documents/round-3/updated-ip-statements/BIKE-Statements-Round3.pdf and the same thing for HQC https://csrc.nist.gov/CSRC/media/Projects/post-quantum-cryptography/documents/round-2/updated-ip-statements/HQC-Statements-Round2.pdf).  So apparently CNRS, and the researchers involved, value the publicity that could come from having their scheme be standardized, but are willing to impede other schemes even though they essentially admitted to their patent not being applicable to them. It's of course their right to do whatever they want, but it is certainly an ethically questionable way for a research institution to act. A possible way for NIST to fight back in this case, Cloudflare style, is to let it be known that this is not scientifically acceptable behavior and kick BIKE + HQC out of the standardization process.

Given the importance that many seem to place on IPR-free standards, I hope that NIST arranges to have a very open public discussion on this matter -- perhaps even disclosing who is asking for what -- at the upcoming workshop or at some other venue.

Best wishes,

Vadim

Rafael Misoczki

May 20, 2021, 12:31:07 PM
to Vadim Lyubashevsky, Moody, Dustin (Fed), pqc-...@list.nist.gov
Dear all,   

The BIKE team understands that the aforementioned CNRS patent covered only BIKE-3, a specific variant which was abandoned by the team long ago.

Currently, the "BIKE proposal" as considered in the 3rd round of the NIST competition corresponds to BIKE-2, and is not affected by the aforementioned patent since it is an NTRU-like quotient approach.

Best regards, 
Rafael Misoczki on behalf of The BIKE Team

Vadim Lyubashevsky

May 20, 2021, 5:46:33 PM
to Rafael Misoczki, Moody, Dustin (Fed), pqc-...@list.nist.gov
> The BIKE team understands that the aforementioned CNRS patent covered only BIKE-3, a specific variant which was abandoned by the team long ago.
>
> Currently, the "BIKE proposal" as considered in the 3rd round of the NIST competition corresponds to BIKE-2, and is not affected by the aforementioned patent since it is an NTRU-like quotient approach.

That's great!  So this patent is now equally inapplicable to BIKE, Frodo, Kyber, and Saber.  I guess you then agree that it's fair that whatever CNRS decides to do with the patent should have the same effect on all of them? :)  

Best,
Vadim

Christopher J Peikert

May 21, 2021, 12:36:24 AM
to pqc-forum
It's clear that the IP context is very important.

Evaluations of this context should be based on an accurate understanding of what the various patents do and do not cover, the prior art, and how the remaining NIST PQC proposals actually operate. I am (blessedly) not a patent lawyer, but I do know these topics well.

This message summarizes and links to a lot of pertinent information on these matters (some new to this forum), and also corrects some serious inaccuracies of fact and analysis that have been advanced here.

The upshot: there is abundant evidence and analysis showing that the two patents Dan has cited should pose no plausible danger to Kyber, SABER, or FrodoKEM.

(Sadly, this has largely consumed my pqc-forum time budget for the present era. So, you likely won't hear much from me for a while, but I'll try to follow the replies.)

First, a couple brief non-technical matters.

On Mon, May 17, 2021 at 6:09 AM D. J. Bernstein <d...@cr.yp.to> wrote:
> NIST finally answered on 2021.05.04 (very far past the FOIA deadlines),
> refusing to provide 3 documents totaling 31 pages (perhaps "secret law"
> in violation of FOIA---to be determined) but providing _some_ dribbles
> of interesting information.

Is NIST's response publicly available somewhere? I didn't see a link in your message.

>  * NIST posted the IP statements three years ago. If there had been
>      coordinated public effort starting at that point to analyze the
>      scope of the known threat then I think the most important parts of
>      the analysis would have been settled by now.
>
>      However---even though the call for proposals had described free
>      usability as "critical"---NIST started trying, with considerable
>      success, to delay and deter public analysis of the patent threats.
>      I don't understand why.

This striking accusation requires evidence. Precisely how and when did NIST delay and deter public analysis, and with considerable success no less? This forum is public (and there is no shortage of others), and there has been a great deal of discussion about patents here over the years, none of it receiving any disapproval from NIST as far as I can see.

The oldest statement I can find from NIST on this forum about patent discussions is from 2018 Jan 9, in response to a specific patent-related question: "That's a good question. We are continuing to consult with our legal team about the best approach to take... We would also appreciate any comments from the community of course." Wow, such delaying! Much deterring!
 
Anyway, on to the main topic.

> To review some of the relevant background: ...


>      Subsequent public discussions of patents have been full of errors:
>      e.g., pushing two definitions of "reconciliation" that turn out to
>      contradict each other; claiming that the patent by Gaborit and
>      Aguilar Melchor was "likely" to be "invalidated" while ignoring
>      Keltie's failure to invalidate that patent; and making exactly the
>      mistake that the Supreme Court had specifically rejected in _Festo
>      v. Shoketsu_, 535 U.S. 722 (2002). The community is obviously very
>      far from consensus on the scope of the threat.

This is a highly selective and misleading summary of the discussions here, which ignores multiple detailed and evidence-backed arguments laying out, e.g.:

   * why prior art ought to invalidate the Gaborit--Aguilar Melchor patent (NB: this argument has not yet been heard or adjudicated by the patent authority, so "failure to invalidate" is beside the point here);

   * why, regardless of its validity, that patent's claims do not apply to Kyber, SABER, or FrodoKEM, and

   * why the claim that another patent covers a commonly used compression method is similarly faulty, due to abundant prior art.

Many of these arguments have remained undisputed here for several months, and even more supporting evidence has been given recently. Details follow.
 
> For comparison, https://ntruprime.cr.yp.to/faq.html says
>
>    There are known patent threats against the "Product
>    NTRU"/"Ring-LWE"/"LPR" lattice proposals: Kyber, SABER, and NTRU
>    LPRime (ntrulpr). These proposals use a "noisy DH + reconciliation"
>    structure that appears to be covered by U.S. patent 9094189 expiring
>    2032, and a 2x ciphertext-compression mechanism that appears to be
>    covered by U.S. patent 9246675 expiring 2033. There are also
>    international patents, sometimes with different wording.

This has multiple errors in the analysis and even the basic facts about the proposals. It is highly misleading to repeat these faulty claims without even acknowledging, much less disputing, those arguments. (That such errors persist in this very first answer of the NTRU Prime FAQ, several months after first being pointed out, leads me not to credit any of the rest of it either.)

Let's consider the two patents and their claimed coverage in turn.

*** U.S. patent 9094189 and Kyber, SABER, NTRU LPRime.

There are at least two important questions:

   (1) whether the patent is valid in the first place, and
   (2) whether it covers any NIST PQC proposals.

On (1), there is a compelling argument [link] that the patent should be invalidated on account of prior art. After more than 5 months, that argument remains undisputed here.

(For reasons I don't understand, the argument has not yet been adjudicated or even heard by the patent authority, but it might be scheduled for the next hearing.)

On (2), as Vadim Lyubashevsky recalled earlier today, the patent cannot apply to Kyber, SABER, or FrodoKEM because their "noisy DH" structure is *non-commutative*, and nobody has shown how to fit such structure within the patent's claims (but note: NTRU LPRime is commutative). Dan's attempt to do so using the "doctrine of equivalents" fell apart because such a broad interpretation would also indisputably cover prior art like the ACPS'09 and LPS'09 cryptosystems, which the patent cannot be allowed to do. The details of the non-commutative argument are given in the second half of the same message above [link], and also remain undisputed here.

Or, instead of these technical arguments, we can just look at the patentee's own lawyer's admission that the patent does not cover the non-commutative case. Vadim: "this fact has even been affirmed in court by the CNRS lawyers during that same [hearing] (see 3.21 on page 7 of [link]). From reading the full proceedings, it seems that they had to do this in order to rescue their main claim which is simply incorrect without assuming commutativity."

This certainly ought to put a final nail in the coffin of "9094189 covers Kyber and SABER". RIP.

*** U.S. patent 9246675 and ciphertext compression.

On the facts: it is flatly untrue that any of Kyber, SABER, or NTRU LPRime uses "a 2x ciphertext-compression mechanism," as asserted above.

It's true that they do perform some compression, namely, "rounding away" a few low bits of certain mod-q integers (after adding in the scaled message/key). However, this yields far less than 2x compression versus not rounding. Moreover, they could not achieve anything close to 2x compression merely by rounding away more bits (without breaking correctness of decryption).
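
To make the scale concrete, here is a minimal sketch (Python) of this
round-low-bits compression; q = 3329 and d = 10 are merely illustrative,
Kyber-like values, not an implementation of any particular proposal:

    # Rounding away low bits of a mod-q coefficient: keep d high-order bits.
    q, d = 3329, 10

    def compress(x):
        # Scale Z_q down to Z_{2^d} and round.
        return round((x * 2**d) / q) % 2**d

    def decompress(y):
        # Approximate inverse; the discarded low bits become a small error.
        return round((y * q) / 2**d)

    x = 1234
    print(compress(x), decompress(compress(x)))   # 380 1235: error of 1
    # Per-coefficient saving: 12 bits -> 10 bits, i.e. ~1.2x, nowhere near 2x.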

By contrast, the cited patent does describe a different method that achieves close to 2x compression. Including the false "2x" in the FAQ answer looks like a FUD-y attempt to conflate the patent's method with what the NIST proposals do---which was a well established technique prior to the patent, as we will see next.

On the analysis: it's implausible that the proposals' round-low-bits compression could be covered by the cited patent, for either of two reasons:

   (1) the technique was already well established by abundant prior art to the 2012 patent application;

   (2) the patent does not claim or even describe the method, probably because it needs to avoid that very prior art.

Here is a (likely incomplete) list of prior art that uses low-bits rounding for compression (among other reasons) in KEMs, encryption, and other lattice-based cryptography; a toy sketch of the rounding operation itself appears after the list:


   -- Section 4.2: "When using a large value of q... the efficiency of the prior schemes is suboptimal... Fortunately, it is possible to improve their efficiency (without sacrificing correctness) by discretizing the LWE distribution more ‘coarsely’ using a relatively small modulus q'..." The subsequent definition of KEM.Encaps uses rounding to compress the ciphertext, just as described in the text.

The above text mentions only LWE, because it predates the publication of the more "structured" and efficient Ring-LWE and Module-LWE (cf. Kyber, SABER, NTRU LPRime). But upon their introduction in 2010 and 2011, a skilled person would easily recognize that rounding works just as well for those too. Indeed, the following prior art does just that, often explicitly for compression purposes:


   -- "Our derandomization technique for LWE is very simple: instead of adding a small random error term to each inner product, we just deterministically round it to the nearest element of a sufficiently 'coarse' public subset of p << q well-separated values..." and

   -- "In the ring setting, the derandomization technique and hardness proof based on ring-LWE all go through without difficulty as well" and

   -- "We believe that this technique should be useful in many other settings" and "LWE: deterministic errors also gives more practical PRGs, GGM-type PRFs, encryption, ..."


   -- "Our dimension-modulus reduction idea enables us to take a ciphertext with parameters (n, log q) as above, and convert it into a ciphertext of the same message, but with parameters (k, log p) which are much smaller than (n, log q)..." and

   -- "the underlying intuition is that Z_p can 'approximate' Z_q by simple scaling, up to a small error..." and

   -- "As a nice byproduct of this technique, the ciphertexts of the resulting fully homomorphic scheme become very short!"

* 2011 https://eprint.iacr.org/2011/277 : more on modulus reduction (a.k.a. rounding), this time explicitly for Ring- and Module-LWE:

   -- "This is an abstract scheme that can be instantiated with either LWE or Ring LWE..." and

   -- "The transformation from c to c' involves simply scaling by (p/q) and rounding..." and

   -- "while they use modulus switching in 'one shot' to obtain a small ciphertext (to which they then apply Gentry’s bootstrapping procedure), we will use it (iteratively, gradually) to keep the noise level essentially constant, while stingily sacrificing modulus size..." and

   -- defining what's now known as Module-LWE: "LWE is simply GLWE instantiated with d = 1. RLWE is GLWE instantiated with n = 1. Interestingly, as far as we know, instances of GLWE between these extremes have not been explored. One would suspect that GLWE is hard for any (n, d) such that ... "
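
As promised before the list, here is a toy sketch of the operation the
quotes describe (Python; the parameters are illustrative only and far
too small to be secure). The quoted "derandomization" and "modulus
switching" steps are the same scale-and-round arithmetic:

    import secrets

    q, p, n = 3329, 256, 8    # toy sizes, illustration only

    a = [secrets.randbelow(q) for _ in range(n)]
    s = [secrets.randbelow(3) - 1 for _ in range(n)]   # small secret

    # LWE would add a small random error to this inner product; the quoted
    # "derandomization" instead rounds it deterministically to the nearest
    # of p << q well-separated values.
    inner = sum(x * y for x, y in zip(a, s)) % q
    b = round((p * inner) / q) % p

    # "Modulus switching" compresses an existing mod-q value the same way:
    # scale by p/q and round, keeping only the high-order information.
    c = secrets.randbelow(q)
    c_small = round((p * c) / q) % p
    print(b, c_small)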

Given all this, it's implausible that the patent could successfully be asserted against any NIST proposal's compression method.

Sincerely yours in cryptography,
Chris

D. J. Bernstein

May 21, 2021, 5:03:16 AM
to pqc-...@list.nist.gov
Vadim Lyubashevsky writes:
> For lattice crypto (and probably isogenies too), however, I can
> confidently say that the currently-active patents made zero scientific
> contributions to any of the finalist/alternate schemes.

This sounds right regarding 2010 Gaborit--Aguilar Melchor, but I don't
see how this can be justified for 2012 Ding.

2010 Gaborit--Aguilar Melchor patented compact noisy-DH encryption using
reconciliation. This is generally known as the LPR cryptosystem, and as
far as I can tell it's scientifically proper to credit this entirely to
LPR---_not_ to the Eurocrypt 2010 version of the LPR paper (which
featured a bigger, slower cryptosystem), but rather to the 2012 revision
of the paper, or alternatively the May 2010 slides accompanying the talk
on the paper, or alternatively April 2010 slides from the same source.

Gaborit and Aguilar Melchor would have been entitled to credit if they
had published immediately in February 2010. However, they stayed quiet
until long after the cryptosystem was well known independently. This
makes it scientifically improper to credit them, the same way that it's
scientifically improper to credit Cocks at GCHQ for coming up with RSA.

(This doesn't stop the Gaborit--Aguilar Melchor patent from being valid
under patent law. The patent application was filed in February 2010.
It's irrelevant to patent law whether the patent holders made any effort
to bring their work to the attention of the community. Applications are
automatically "published" within 18 months, and defendants who didn't
notice such "publications" won't avoid liability, although they can
sometimes avoid triple damages.)

For 2012 Ding, the situation is different:

* 2012 Ding patented _and published_ compact noisy-DH encryption
using compressed reconciliation, saving a factor ~2 in ciphertext
size compared to LPR.

* Saving a factor ~2 in ciphertext size compared to LPR is critical
for Kyber, NTRU LPRime, SABER, and other modern versions of LPR.
Without compressing ciphertexts, these would not be competitive in
sizes with Quotient NTRU: the NTRU submission, Streamlined NTRU
Prime, etc.

Scientifically, how can one justify denying 2012 Ding credit for
compressing LPR ciphertexts?

Any effort to claim that compressing LPR ciphertexts was already obvious
runs into a big problem, namely 2014 Peikert, which claims as its "main
technical innovation" a "low-bandwidth" reconciliation technique that
"reduces the ciphertext length" of previous "already compact" schemes
"nearly twofold".

In fact, 2012 Ding had already used low-bandwidth reconciliation to
reduce the ciphertext length of previous already compact schemes nearly
twofold. It's simply not true that 2014 Peikert was ~2x smaller than
2012 Ding. If ~2x size reduction compared to LPR was already obvious at
the time of 2012 Ding, then why did 2014 Peikert highlight it as a new
accomplishment? (This question is shared between the scientific-credit
analysis and the patent analysis.)

Nowadays everyone understands how trivial it is to chop (say) 1024
reconciliation coefficients down to 256 coefficients, reducing the total
ciphertext from 1024+1024 coefficients down to 1024+256; and how trivial
it is to save more space by rounding each of the 256 coefficients. But
if these ideas were obvious in 2010 then why didn't original LPR do
them? If it's okay for 2014 Peikert to claim that it was an "innovation"
in 2014 to squeeze LPR's 1024+1024 coefficients down to only slightly
more space than 1024, how can 2012 Ding possibly deserve zero credit for
having two years earlier squeezed LPR's 1024+1024 coefficients down to
only slightly more space than 1024?
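
To put rough numbers on this (purely illustrative figures: 1024
coefficients at 12 bits each, reconciliation chopped to 256 coefficients
rounded to 4 bits each):

    # Back-of-the-envelope ciphertext sizes; all figures illustrative.
    n, bits = 1024, 12

    lpr     = (n + n) * bits          # original LPR: 1024+1024 coefficients
    chopped = (n + 256) * bits        # keep only 256 reconciliation coeffs
    rounded = n * bits + 256 * 4      # ...and round those down to 4 bits each

    for label, size in (("1024+1024", lpr), ("1024+256", chopped),
                        ("chopped+rounded", rounded)):
        print(label, size // 8, "bytes")
    # 3072, 1920, 1664 bytes: the last is only slightly more than the
    # 1536 bytes of a single 1024-coefficient vector, i.e. the ~2x saving.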

2014 Peikert objects that 2012 Ding's output is biased. However:

(1) This bias isn't a problem for building a KEM.

(2) This in any case can't justify claiming that 2012 Ding "made zero
scientific contributions to any of the finalist/alternate
schemes".

_If_ there's some serious argument that the bias matters, then a proper
credit statement would be "2012 Ding compressed LPR ciphertexts, but
added a bias, which was then removed by 2014 Peikert"---not omitting
2012 Ding from the history.

The message I'm replying to claims that people now compressing "public
key encryption" schemes are actually doing something different from 2012
Ding and 2014 Peikert compressing "reconciliation". However:

(1) I already went super-carefully through this claim in unanswered
email dated 1 Jan 2021 13:19:26 +0100. As far as I can tell, the
distinction is mathematically and cryptographically untenable.

(2) Like the bias argument, this in any case can't justify claiming
that 2012 Ding "made zero scientific contributions to any of the
finalist/alternate schemes".

_If_ there's some way to rescue a well-defined distinction between LPR
variant 1 and LPR variant 2 (I'm skeptical) _and_ some serious argument
that this matters for the compression, then a proper credit statement
would be something like "2012 Ding compressed variant-1 LPR ciphertexts,
and then [...] compressed variant-2 LPR ciphertexts".

> While Dan likes to tout that a CNRS patent has withstood one round of
> a lawsuit that tried to kill it,

I don't "like to tout" this. If the community isn't being adequately
warned regarding risks---e.g., if there's a claim that the patent is
"likely" to be "invalidated", without a huge disclaimer saying that
Keltie already tried and completely failed in the first round---then
further warnings are obviously necessary.

> he leaves out the important fact that the patent does
> not apply to Kyber/Saber/Frodo because those schemes are *non-commutative*
> (whereas NewHope and NTRU-LPRime are commutative).

I spelled out in email dated 13 Dec 2020 18:37:43 +0100 exactly how the
patent would apply to Kyber even in a fantasy world without the doctrine
of equivalents.

There was one message labeled as a "reply" to this, but that message
didn't follow the traditional email-handling practice of quoting and
replying to each point. Anyone who reviews the content can see that it
wasn't actually answering the message it was labeled as replying to, but
was instead restating earlier arguments that had already been answered.

> We had a long technical discussion about
> this on this board, and this fact has even been affirmed in court by the CNRS
> lawyers during that same trial (see 3.21 on page 7 of https://register.epo.org/
> application?documentId=E2T6283H7805DSU&number=EP11712927&lng=en&npl=false).

No, it hasn't.

First of all, procedurally, the questions arising in Keltie's efforts to
invalidate the patent concern whether the patented idea works and
whether it's anticipated by prior art. These are _not_ the same as the
question of whether something infringes on the patent. The tribunal is
_not_ faced with the question of whether the patent covers Kyber and
SABER. (Formally, even Frodo isn't directly at issue, but it's much
closer to the pre-patent work than Kyber and SABER are, and I don't see
how the patent holders would try bringing a case against it.)

As for content, details are important, and the quote you point to
doesn't say what you claim it says. Most importantly, you ask whether
"schemes are *non-commutative*", whereas the quote is asking whether a
system is "basé sur des anneaux non-commutatifs" (based on
non-commutative rings). Here's an example to illustrate why the words
matter:

* In a case against Kyber, the plaintiff's lawyer points to the
(\Z/q)[x]/(x^256+1) in the Kyber specification, pulls out an expert
witness saying that this is a commutative ring, points to the
advertising of this ring as defining the "framework" for Kyber,
etc., and concludes that Kyber is _based_ on a commutative ring.

* The defendant says "but there are also matrices on top of this so
  Kyber is non-commutative". Even if everyone agrees with some
  definition concluding that the scheme _is_ non-commutative, how is
  this supposed to contradict the statement that the scheme is _based
  on_ a commutative ring? (The toy computation below illustrates the
  distinction.)
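
For concreteness, here's that toy computation (Python; n = 4 instead of
256 purely for brevity): the base ring (\Z/q)[x]/(x^n+1) is commutative,
while matrices of such polynomials, as in Kyber's module structure, are
not.

    # Toy check: polynomials in (Z/q)[x]/(x^n+1) commute; 2x2 matrices of
    # such polynomials generally don't. n = 4 here purely for brevity.
    q, n = 3329, 4

    def polymul(f, g):
        # Multiply in (Z/q)[x]/(x^n + 1): x^n wraps around to -1.
        h = [0] * n
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                k = i + j
                h[k % n] = (h[k % n] + (fi * gj if k < n else -fi * gj)) % q
        return h

    def poladd(f, g):
        return [(a + b) % q for a, b in zip(f, g)]

    def matmul(A, B):
        # 2x2 matrices with polynomial entries.
        return [[poladd(polymul(A[i][0], B[0][j]), polymul(A[i][1], B[1][j]))
                 for j in range(2)] for i in range(2)]

    f, g = [1, 2, 3, 4], [5, 6, 7, 8]
    print(polymul(f, g) == polymul(g, f))   # True: base ring is commutative

    A = [[f, g], [g, f]]
    B = [[g, f], [f, [0, 0, 0, 1]]]
    print(matmul(A, B) == matmul(B, A))     # False: matrix layer is not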

In the same scenario, if there's then an argument about what exactly
"based on" means (which I doubt, since procedurally I don't expect 3.21
to carry such weight), the plaintiff's lawyer will say that what really
matters here is the efficiency coming from the commutative ring, and the
plaintiff's expert witnesses will pull out performance numbers
supporting this, and the court will accept this since performance
numbers are easy to understand.

Fundamentally, the reality is that Kyber etc. need polynomials for their
efficiency. The use of polynomials inside noisy DH wasn't published
before LPR---and the patent came two months before that. Before the
patent there were less efficient noisy-DH systems that didn't use
polynomials, and less efficient polynomial systems that didn't use noisy
DH; neither of these will stop the patent from covering every noisy-DH
system using polynomials.

I understand that you're trying to draw the dividing line differently,
saying that what matters isn't _using polynomials_ but rather _not using
matrices_, so someone who combines polynomials with matrices will cross
the line. But this line isn't forced by the patent wording (see my email
dated 13 Dec 2020 18:37:43 +0100), isn't forced by the prior art, and
doesn't make sense from an efficiency perspective. As I wrote before:
"It's normal in patent cases for defendants to try to avoid a patented
efficiency improvement by interpolating between the prior art and the
efficiency improvement, and it's normal for the patentee to win."

> This is evidenced by the fact that in
> that very same patent he uses standard less-efficient public key
> encryption for another application.

I already spent some paragraphs commenting on this in my unanswered
email dated 1 Jan 2021 13:19:26 +0100.

> I of course agree that anything can happen in a court where
> non-experts get to decide on technical topics

Courts can definitely do surprising things, and leaving a buffer zone
around patents is good for this reason. There are also some rules, and a
serious risk analysis is not simply "anything can happen".

> but my point is that
> these patents don't seem any more scientifically relevant than the
> dozens of "running this KEM on a faster computer will make it faster"
> type patents that are out there and will continue to be pumped out and
> could put every scheme in some sort of danger.  

I agree that there are many mindless new patents that could pose
dangers. If as a community we don't make serious efforts to track and
avoid these dangers, and big companies end up being sued and having to
do _another_ round of painful multi-year crypto upgrades, then the users
will justifiably blame us for being so reckless.

I also agree that, scientifically, the mindless patents don't deserve
credit (because they're mindless, which _can_ stop them in court), and
the 2010 patent doesn't deserve credit (for different reasons covered
above, which definitely don't stop it in court). The 2012 patent is a
different matter.

> I would also like to see NIST and the community actively discourage
> IP trolling by emphatically, and publicly, refusing to pay anything
> unless it's just some small token amount

I'm puzzled by the description of the 2012 Ding patent, whatever Ding
asked Google to pay, etc. as "IP trolling".

Moti Yung has patents. I've heard him claiming that it's important to
have patents so that in the end he receives proper credit, and I've
heard Ding claiming the same regarding his own patents. To me this is
the intellectual equivalent of hearing a landowner claiming that it's
important to plant landmines to deter trespassers---but are Yung and
Ding wrong regarding the facts?

If our community were taking action against people who violate basic
ethics rules regarding credit, rather than constantly making excuses for
those people, then perhaps we'd have fewer patents to worry about---or
at least we'd be taking away the credit excuse and making clear that
it's all about the money.

More to the point, can you imagine facing future cryptographic users and
saying "Yeah, we realized that patents could do even more damage to
post-quantum crypto than Certicom did to ECC, and we had a chance to use
some taxpayer money to wipe out some patents, but we decided not to
because we don't want to feed the trolls"?

My primary concern here is with the possibility that NIST _won't_
succeed in buying out the most important patents. After years of NIST
projecting a stop-worrying-we're-on-top-of-this attitude, it's worrisome
to hear NIST saying that it's in negotiations regarding just _one_ of
the known patent families.

> Given the importance that many seem to place on IPR-free standards, I hope that
> NIST arranges to have a very open public discussion on this matter -- perhaps
> even disclosing who is asking for what -- at the upcoming workshop or at some
> other venue.

That would be a nice first step, but I would hope for much more
transparency from NIST, say weekly progress reports.

---Dan

Vadim Lyubashevsky

May 21, 2021, 7:00:26 AM
to pqc-forum
Hi Dan, all,

We've been on this merry-go-round for quite a while. You say that reconciliation = public-key encryption with chopping, and I say that they are very different things. I say this because the equivalence is not immediately obvious to me (I know that this is not a very objective criterion); because one directly gives a KEM and the other a PKE; because the maximum you can compress differs between the two; and because Ding did not make this connection even in the patent where he also used public-key encryption. The latter two reasons suggest that maybe it really isn't obvious, and not just to me. So when I said that scientifically, none of the current schemes use Ding's patent, I meant that none of them use reconciliation. The two approaches to compression are different, and each is a scientific contribution independent of the other. But anyway, we've been here before ... Let me just reply to some new things.
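
To fix ideas, here is a generic one-coefficient reconciliation toy in
the spirit of the published cross-rounding approaches (an illustration
with made-up parameters, not a rendering of anyone's patent claims).
Both sides hold nearby values w ≈ w' mod q; one hint bit is
transmitted, and both derive the same key bit:

    import random

    q = 4096                         # toy modulus (power of 2)

    def keybit(v):                   # round(2v/q) mod 2: which half of Z_q v is in
        return ((2 * v + q // 2) // q) % 2

    def hint(v):                     # floor(4v/q) mod 2: the one published bit
        return (4 * v // q) % 2

    def rec(w2, h):
        # Toy recovery by brute force: nearest value mod q consistent with
        # the hint (fine for illustration; real mechanisms do this in O(1)).
        dist = lambda v: min((v - w2) % q, (w2 - v) % q)
        v = min((v for v in range(q) if hint(v) == h), key=dist)
        return keybit(v)

    for _ in range(200):
        w = random.randrange(q)                                # one side's value
        w2 = (w + random.randrange(-q // 8 + 1, q // 8)) % q   # other side's value
        assert rec(w2, hint(w)) == keybit(w)                   # agree via 1 bit

The single transmitted bit suffices because the hint is a function of
the shared noisy value itself; the PKE route instead adds an
independent message before rounding, and then has to keep enough bits
to absorb both the noise and the rounding error.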
 
> > While Dan likes to tout that a CNRS patent has withstood one round of
> > a lawsuit that tried to kill it,
>
> I don't "like to tout" this. If the community isn't being adequately
> warned regarding risks---e.g., if there's a claim that the patent is
> "likely" to be "invalidated", without a huge disclaimer saying that
> Keltie already tried and completely failed in the first round---then
> further warnings are obviously necessary.

My problem is not that you're informing. It's that you're putting your usual anti-lattice-crypto spin on things, and the lawyers who might be reading this (and who obviously cannot judge these facts for themselves) might start thinking that they have a much stronger case than they really do. And if NIST is really negotiating with them, that could make NIST's job harder, because the other side, sitting on a barely relevant piece of IP, now thinks it's sitting on a pot of gold.
 
> As for content, details are important, and the quote you point to
> doesn't say what you claim it says. Most importantly, you ask whether
> "schemes are *non-commutative*", whereas the quote is asking whether a
> system is "basé sur des anneaux non-commutatifs" ("based on
> non-commutative rings"). Here's an example to illustrate why the words
> matter:
>
>    * In a case against Kyber, the plaintiff's lawyer points to the
>      (\Z/q)[x]/(x^256+1) in the Kyber specification, pulls out an expert
>      witness saying that this is a commutative ring, points to the
>      advertising of this ring as defining the "framework" for Kyber,
>      etc., and concludes that Kyber is _based_ on a commutative ring.
>
>    * The defendant says "but there are also matrices on top of this so
>      Kyber is non-commutative". Even if everyone agrees with some
>      definition concluding that the scheme _is_ non-commutative, how is
>      this supposed to contradict the statement that the scheme is _based
>      on_ a commutative ring?

From the previous statements in that discussion, it's quite clear what they were talking about. And the first interpretation is absurd -- everything is then based on an operation over a commutative ring, because you can always dig into the mathematical structure until all your operations are NANDs over {0,1}.
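
To make the algebra under dispute concrete (toy sizes, and no position
on the legal question): the base ring is commutative, while the matrix
layer on top of it is not.

    q, n = 17, 4                        # tiny stand-ins for Kyber's q = 3329, n = 256

    def mul(f, g):                      # multiply in (Z/q)[x]/(x^n + 1)
        h = [0] * n
        for i in range(n):
            for j in range(n):
                s = 1 if i + j < n else -1
                h[(i + j) % n] = (h[(i + j) % n] + s * f[i] * g[j]) % q
        return h

    def matmul(A, B):                   # 2x2 matrices whose entries are ring elements
        return [[[(x + y) % q for x, y in zip(mul(A[i][0], B[0][j]),
                                              mul(A[i][1], B[1][j]))]
                 for j in range(2)] for i in range(2)]

    f, g, one = [1, 2, 3, 4], [5, 6, 7, 8], [1, 0, 0, 0]
    assert mul(f, g) == mul(g, f)               # base ring: commutative
    A, B = [[f, g], [g, f]], [[g, f], [f, one]]
    assert matmul(A, B) != matmul(B, A)         # matrix layer: not commutative

Whether that combination counts as "based on" the commutative ring
underneath, or as a non-commutative system, is exactly the wording
dispute here.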
 
> > I would also like to see NIST and the community actively discourage
> > IP trolling by emphatically, and publicly, refusing to pay anything
> > unless it's just some small token amount
>
> I'm puzzled by the description of the 2012 Ding patent, whatever Ding
> asked Google to pay, etc. as "IP trolling".

Trolling doesn't necessarily mean that the original patent was written with malicious intent.  For the record, I actually found the 2010 and 2012 patents to be quite clear and precise (or at least, no less clear or precise than some scientific papers) in what they were claiming (and even if the 2010 patent was not novel due to prior art, it doesn't mean that it was written with malicious intent). The trolling comes from trying to later extend the applicability of these patents to something that they were not claiming or that is covered by prior art. 
 
> If our community were taking action against people who violate basic
> ethics rules regarding credit, rather than constantly making excuses for
> those people, then perhaps we'd have fewer patents to worry about---or
> at least we'd be taking away the credit excuse and making clear that
> it's all about the money.

I think that it's pretty clear that it's always about the money.  But I am not judging anyone for wanting money, if they actually did something productive. 
 
> More to the point, can you imagine facing future cryptographic users and
> saying "Yeah, we realized that patents could do even more damage to
> post-quantum crypto than Certicom did to ECC, and we had a chance to use
> some taxpayer money to wipe out some patents, but we decided not to
> because we don't want to feed the trolls"?

If there were an easy answer here, then everyone would just do that. I stated before that I personally don't see anything wrong with paying some token amount (low 6 figures). One could even argue that paying small licensing fees to small and medium-sized organizations and research institutions that actually produce something helps keep them going, which is a good thing for the industry (we don't want only giant corporations in the ecosystem). But paying millions for tangential patents just to be on the safe side would encourage true trolls to appear, who could do even more damage later on.

And anyway, in about 10 years most of these patents will have expired, and if we don't feed the trolls now, not many new ones should pop up. So we should not be desperate -- cryptographic agility will have to be built into many current systems anyway, because NIST said that it's unsure whether codes/lattices/isogenies are all secure (and I think that most reasonable people agree here, especially about the quantum-secure part). So we can also leverage this cryptographic agility for switching between schemes if IP issues arise. The more I think about it, the more I like Markku's suggestion of standardizing many different schemes/techniques and having some "IP backups" in addition to "security backups".
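
As a sketch of what that agility can look like in code (hypothetical
names and API, nothing standardized):

    # Hypothetical KEM registry: swapping a scheme out (for security *or*
    # IP reasons) becomes a configuration change, not a protocol redesign.
    KEMS = {}

    def register(name, keygen, encaps, decaps):
        KEMS[name] = (keygen, encaps, decaps)

    def negotiate(our_preferences, peer_supports):
        for name in our_preferences:          # most-preferred first
            if name in peer_supports and name in KEMS:
                return name
        raise ValueError("no mutually supported KEM")

    register("kem-a", *((lambda: None,) * 3))   # placeholders for real primitives
    register("kem-b", *((lambda: None,) * 3))
    print(negotiate(["kem-a", "kem-b"], {"kem-b"}))   # peer dropped kem-a -> kem-b

Dropping a scheme for IP reasons then looks operationally identical to
dropping it for security reasons.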

> That would be a nice first step, but I would hope for much more
> transparency from NIST, say weekly progress reports.

Weekly seems a bit much, but definitely some lifting of the veil is necessary. 

Best,
Vadim
 


Christopher J Peikert

May 21, 2021, 9:30:54 AM
to pqc-forum
(Addressing some things with cross-links to prior information and arguments, along with a couple of new technical points, then stepping off the merry-go-round.)

On Fri, May 21, 2021 at 5:03 AM D. J. Bernstein <d...@cr.yp.to> wrote:
> Vadim Lyubashevsky writes:
> > For lattice crypto (and probably isogenies too), however, I can
> > confidently say that the currently-active patents made zero scientific
> > contributions to any of the finalist/alternate schemes.
>
> This sounds right regarding 2010 Gaborit--Aguilar Melchor, but I don't
> see how this can be justified for 2012 Ding.

Vadim is right; it's also the case for 2012 Ding. See my recent message [link], specifically the "U.S. patent 9246675 and ciphertext compression" section. It shows that the compression technique used by the NIST finalists/alternates ("rounding some low bits after adding the message/key") was very well established in the prior art, and is different from the method in 2012 Ding.
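
To make that prior-art technique concrete, here is a one-coefficient
toy of "add the message, then round away low bits" (a generic
illustration; the modulus and noise bound are made up, not any
candidate's parameters):

    import random

    q, d = 4096, 4                    # toy modulus; keep d bits per coefficient

    def compress(v):                  # round v to its top d bits ("chopping")
        return ((v * 2**d + q // 2) // q) % 2**d

    def decompress(c):
        return (c * q + 2**(d - 1)) // 2**d % q

    for _ in range(1000):
        w = random.randrange(q)                             # sender's noisy value
        w2 = (w + random.randrange(-q // 16, q // 16)) % q  # receiver's approximation
        m = random.randrange(2)                             # message bit
        c = compress((w + (q // 2) * m) % q)                # add message, chop
        x = (decompress(c) - w2) % q
        assert m == ((2 * x + q // 2) // q) % 2             # rounding recovers m

With these numbers, keeping d = 4 bits always decrypts (rounding error
up to q/32 plus noise at most q/16 stays under the q/4 decision
margin), while d = 1 cannot reliably work, since the rounding error
alone reaches q/4. That floor on chopping a message-carrying
ciphertext is the arithmetic behind the distinction drawn above.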
 
> 2010 Gaborit--Aguilar Melchor patented compact noisy-DH encryption using
> reconciliation.

No, they patented (or have attempted to patent, at least) the patent's specific claims and limitations.
 
> For 2012 Ding, the situation is different:
>
>    * 2012 Ding patented _and published_ compact noisy-DH encryption
>      using compressed reconciliation, saving a factor ~2 in ciphertext
>      size compared to LPR.

No, Ding patented the patent's specific claims and limitations.
 
>    * Saving a factor ~2 in ciphertext size compared to LPR is critical
>      for Kyber, NTRU LPRime, SABER, and other modern versions of LPR.

No, it's not "critical," because those systems don't come close to saving a factor of ~2 in ciphertext size (versus not compressing), and they're doing just fine. This is also explained in the same message linked above [link].

I previously pointed out this basic factual error about "2x" on 11 Dec 2020 [link], yet you have ignored the correction and persist in repeating the error. (My guess is that this is because it's central to sowing FUD-y confusion, by conflating the prior art and NIST candidates with 2012 Ding.)
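
For concreteness, here is the arithmetic with the round-3 Kyber768
parameters as I recall them (worth double-checking against the spec):
k = 3, n = 256, q = 3329, with ciphertext components compressed to
d_u = 10 and d_v = 4 bits per coefficient.

    n, k, du, dv = 256, 3, 10, 4      # round-3 Kyber768 (assumed; verify vs. spec)
    bits = 12                         # ceil(log2(3329)): uncompressed coefficient
    compressed = (k * n * du + n * dv) // 8       # = 1088 bytes
    uncompressed = (k * n + n) * bits // 8        # = 1536 bytes
    print(compressed, uncompressed, round(uncompressed / compressed, 2))  # ~1.41

So compression buys roughly a factor 1.4 over shipping full 12-bit
coefficients, well short of ~2x.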
 
>      Without compressing ciphertexts, these would not be competitive in
>      sizes with Quotient NTRU: the NTRU submission, Streamlined NTRU
>      Prime, etc.
>
> Scientifically, how can one justify denying 2012 Ding credit for
> compressing LPR ciphertexts?

One successfully does so by pointing to the abundant prior research on compressing (Ring-/Module-)LWE ciphertexts/samples, as I did in the above-linked message. Such compression was a widely used technique across multiple parts of lattice-based cryptography.
 
> Any effort to claim that compressing LPR ciphertexts was already obvious
> runs into a big problem, namely 2014 Peikert, which claims as its "main
> technical innovation" a "low-bandwidth" reconciliation technique that
> "reduces the ciphertext length" of previous "already compact" schemes
> "nearly twofold".

It _was_ obvious, thanks to all the research using low-bits rounding for compression. 2014 Peikert is irrelevant to this discussion, because the 2x reconciliation techniques work differently, and the NIST proposals don't use them.

I previously made this point (irrelevance of 2014 Peikert to the NIST candidates) in my 11 Dec 2020 message [link].
 
> Nowadays everyone understands how trivial it is to chop

People who were working in (or at least following) the area understood this in 2011, as the cited papers of the time show.
 
> The message I'm replying to claims that people now compressing "public
> key encryption" schemes are actually doing something different from 2012
> Ding and 2014 Peikert compressing "reconciliation". However:
>
>    (1) I already went super-carefully through this claim in unanswered
>        email dated 1 Jan 2021 13:19:26 +0100. As far as I can tell, the
>        distinction is mathematically and cryptographically untenable.

Please now consider both my message from yesterday and this one a response to that email. For the record I'll reply to it with appropriate links soon.

The distinction you haven't seen is easy to see based on the contents of that email. It is the difference between:

  * "Example 2," which is how Kyber/SABER and the abundant prior art work, with M having no dependence on A, b, or c, and compression to a single bit not being possible, and

  * "Example 4," which is what 2012 Ding does, with M depending on Ab, and allowing compression to a single bit.

You claim that Example 4 is a "special case" of Example 2 (via Example 3), which is an argument that 2012 Ding covers (at least part of) the cited prior art. I'm not taking a position on that argument, but it's not an argument that any of the finalists/alternates infringe on 2012 Ding, nor owe it any scientific credit. If anything, it's an argument that 2012 Ding must be limited to the improved compression obtained by Example 4 (which, again, the NIST proposals don't achieve or use).

>    (2) Like the bias argument, this in any case can't justify claiming
>        that 2012 Ding "made zero scientific contributions to any of the
>        finalist/alternate schemes".

Yes it can, because the compression technique used by finalist/alternate schemes was well known and widely used before 2012 Ding.
 
> _If_ there's some way to rescue a well-defined distinction between LPR
> variant 1 and LPR variant 2 (I'm skeptical) _and_ some serious argument
> that this matters for the compression, then a proper credit statement
> would be something like "2012 Ding compressed variant-1 LPR ciphertexts,
> and then [...] compressed variant-2 LPR ciphertexts".

That would be a weird credit statement for the finalist/alternate schemes, because [...] would be filled with several papers from 2009-2011, and the "variant-1" clause would be irrelevant to the schemes.
 
> > While Dan likes to tout that a CNRS patent has withstood one round of
> > a lawsuit that tried to kill it,
>
> I don't "like to tout" this. If the community isn't being adequately
> warned regarding risks---e.g., if there's a claim that the patent is
> "likely" to be "invalidated", without a huge disclaimer saying that
> Keltie already tried and completely failed in the first round---then
> further warnings are obviously necessary.

This "tried and completely failed" point is addressed in the first part of my recent message [link]. The particular argument that the patent covers prior art has not been heard or adjudicated yet (for reasons I don't understand).
 
> > he leaves out the important fact that the patent does
> > not apply to Kyber/Saber/Frodo because those schemes are *non-commutative*
> > (whereas NewHope and NTRU-LPRime are).
>
> I spelled out in email dated 13 Dec 2020 18:37:43 +0100 exactly how the
> patent would apply to Kyber even in a fantasy world without the doctrine
> of equivalents.
>
> There was one message labeled as a "reply" to this, but that message
> didn't follow the traditional email-handling practice of quoting and
> replying to each point.

LOL. But in fact, my reply did quote and reply to the non-commutativity point that is in dispute; see here: [link]
 
> Anyone who reviews the content can see that it
> wasn't actually answering the message it was labeled as replying to, but
> was instead restating earlier arguments that had already been answered.

My reply [link] shows that your argument could not be accepted, because letting the patent be so broad as to cover the non-commutative case would make it cover at least two prior non-commutative systems (ACPS'09 and LPS'09), which it cannot be allowed to do.

In that thread I don't see any substantive response to this point---at least not one that follows the traditional email-handling practice of quoting and replying to it.
 
> The tribunal is
> _not_ faced with the question of whether the patent covers Kyber and
> SABER. (Formally, even Frodo isn't directly at issue, but it's much
> closer to the pre-patent work than Kyber and SABER are, and I don't see
> how the patent holders would try bringing a case against it.)

Really? Why not? Your argument that the patent covers the non-commutative case would apply just as well to FrodoKEM. The words "efficient" / "efficiency" do not appear anywhere in the patent, so FrodoKEM's relatively lesser efficiency would not put it out of bounds of your argument. (Plus, it is efficient enough to use in many applications.) More to the point, the prior art of the similarly less-efficient ACPS'09 and LPS'09 would not escape either. This is why your argument can't be accepted.

Now my pqc-forum time budget has officially been exceeded. I hope this was informative.