IPR issues have been an explicit evaluation criterion since the beginning of the PQC standardization process. Our efforts in this area are ongoing; we are aware of the issues. It is certainly a complex situation.
We would appreciate any feedback from the community on this issue, including the size of role that patents/IPR issues should play in our final selection process.
Thanks,
Dustin Moody
NIST
> IPR issues have been an explicit evaluation criterion since the beginning of the PQC standardization process. Our efforts in this area are ongoing; we are aware of the issues. It is certainly a complex situation.
>
> We would appreciate any feedback from the community on this issue, including the size of role that patents/IPR issues should play in our final selection process.
IMHO, patent/IPR issues should play a critical role – being a show-stopper/deal-breaker.
Thanks!
--
Regards,
Uri
There are two ways to design a system. One is to make it so simple there are obviously no deficiencies.
The other is to make it so complex there are no obvious deficiencies.
        - C. A. R. Hoare
To view this discussion on the web visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/SA1PR09MB866962416538E109C0EAB5E2E52B9%40SA1PR09MB8669.namprd09.prod.outlook.com.
> We would appreciate any feedback from the community on this issue, including the size of role that patents/IPR issues should play in our final selection process.
+1 on prioritizing IPR-free algorithms. We too have been trying to stay away from algorithms with IPR claims as much as we can. Algorithms integrated into standards ought to be open. NIST has followed that rationale and has had great success with it in the past. On the other hand, as others have already mentioned, certain companies' IPR claims have hindered very efficient primitives from being deployed and adopted for years.
And I would suggest that submitters consider opening up their potential IPR. IMO, the bragging rights of having a standardized algorithm used for years outweigh the benefit of having an algorithm which is not used or standardized but could theoretically generate lots of revenue 😉
Rgs,
Panos
The BIKE team understands that the aforementioned CNRS patent covered only BIKE-3, a specific variant which the team abandoned long ago. Currently, the "BIKE proposal" as considered in the 3rd round of the NIST competition corresponds to BIKE-2, which is not affected by the aforementioned patent since it uses an NTRU-like quotient approach.
NIST finally answered on 2021.05.04 (very far past the FOIA deadlines),
refusing to provide 3 documents totaling 31 pages (perhaps "secret law"
in violation of FOIA---to be determined) but providing _some_ dribbles
of interesting information.
 * NIST posted the IP statements three years ago. If there had been a
   coordinated public effort starting at that point to analyze the
   scope of the known threat, then I think the most important parts of
   the analysis would have been settled by now.
   However---even though the call for proposals had described free
   usability as "critical"---NIST started trying, with considerable
   success, to delay and deter public analysis of the patent threats.
   I don't understand why.
To review some of the relevant background: ...
   Subsequent public discussions of patents have been full of errors:
   e.g., pushing two definitions of "reconciliation" that turn out to
   contradict each other; claiming that the patent by Gaborit and
   Aguilar Melchor was "likely" to be "invalidated" while ignoring
   Keltie's failure to invalidate that patent; and making exactly the
   mistake that the Supreme Court had specifically rejected in _Festo
   v. Shoketsu_, 535 U.S. 722 (2002). The community is obviously very
   far from consensus on the scope of the threat.
For comparison, https://ntruprime.cr.yp.to/faq.html says
  There are known patent threats against the "Product
  NTRU"/"Ring-LWE"/"LPR" lattice proposals: Kyber, SABER, and NTRU
  LPRime (ntrulpr). These proposals use a "noisy DH + reconciliation"
  structure that appears to be covered by U.S. patent 9094189 expiring
  2032, and a 2x ciphertext-compression mechanism that appears to be
  covered by U.S. patent 9246675 expiring 2033. There are also
  international patents, sometimes with different wording.
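The "noisy DH + reconciliation" structure quoted above can be sketched in toy form. The following is a deliberately insecure illustration with made-up parameters (n = 8, q = 257), not the actual construction from any submission or patent:

```python
# Toy sketch of "noisy DH + reconciliation" over Z_q[x]/(x^n + 1).
# NOT secure; parameters and noise distribution are illustrative only.
import random

N, Q = 8, 257  # toy parameters

def mul(a, b):
    # schoolbook multiplication in Z_q[x]/(x^n + 1)
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                res[i + j] = (res[i + j] + ai * bj) % Q
            else:
                res[i + j - N] = (res[i + j - N] - ai * bj) % Q
    return res

def add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

def small():
    # "noise": polynomial with coefficients in {-1, 0, 1}
    return [random.randrange(-1, 2) % Q for _ in range(N)]

g = [random.randrange(Q) for _ in range(N)]  # public ring element

# Alice publishes b = g*s + e; Bob publishes u = g*s' + e'.
s, e = small(), small()
b = add(mul(g, s), e)
s2, e2, e3 = small(), small(), small()
u = add(mul(g, s2), e2)

# Both sides now hold noisy versions of g*s*s':
v = add(mul(b, s2), e3)  # Bob's view
w = mul(u, s)            # Alice's view

# v and w differ only by small noise; "reconciliation" is the step
# that removes this difference so both sides agree on shared bits.
diff = [min((x - y) % Q, (y - x) % Q) for x, y in zip(v, w)]
assert all(d < Q // 8 for d in diff)
```

The patent questions discussed in this thread are about how (and how compactly) that final reconciliation step is done, not about this generic outline.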
> While Dan likes to tout that a CNRS patent has withstood one round of
> a lawsuit that tried to kill it,
I don't "like to tout" this. If the community isn't being adequately
warned regarding risks---e.g., if there's a claim that the patent is
"likely" to be "invalidated", without a huge disclaimer saying that
Keltie already tried and completely failed in the first round---then
further warnings are obviously necessary.
As for content, details are important, and the quote you point to
doesn't say what you claim it says. Most importantly, you ask whether
"schemes are *non-commutative*", whereas the quote is asking whether a
system is "basé sur des anneaux non-commutatifs". Here's an example to
illustrate why the words matter:
  * In a case against Kyber, the plaintiff's lawyer points to the
   (\Z/q)[x]/(x^256+1) in the Kyber specification, pulls out an expert
   witness saying that this is a commutative ring, points to the
   advertising of this ring as defining the "framework" for Kyber,
   etc., and concludes that Kyber is _based_ on a commutative ring.
  * The defendant says "but there are also matrices on top of this so
   Kyber is non-commutative". Even if everyone agrees with some
   definition concluding that the scheme _is_ non-commutative, how is
   this supposed to contradict the statement that the scheme is _based
   on_ a commutative ring?
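To make the wording dispute concrete, here is a toy computation (tiny n and q, chosen only so the arithmetic is easy to check) exhibiting both facts at once: multiplication in the base ring (Z/q)[x]/(x^n+1) commutes, while the matrix layer built on top of it, as in Kyber-style module schemes, does not:

```python
# Toy illustration: commutative base ring vs. non-commutative matrix layer.
import random

N, Q = 4, 17  # toy parameters, illustration only

def mul(a, b):
    # multiplication in the ring (Z/q)[x]/(x^n + 1)
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                res[i + j] = (res[i + j] + ai * bj) % Q
            else:
                res[i + j - N] = (res[i + j - N] - ai * bj) % Q
    return res

def add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

# The base ring is commutative: a*b == b*a for all ring elements.
a = [random.randrange(Q) for _ in range(N)]
b = [random.randrange(Q) for _ in range(N)]
assert mul(a, b) == mul(b, a)

def matmul(A, B):
    # 2x2 matrices whose entries are ring elements
    return [[add(mul(A[i][0], B[0][j]), mul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

# The matrix layer is not commutative: A*B != B*A in general.
one = [1] + [0] * (N - 1)
x = [0, 1] + [0] * (N - 2)
zero = [0] * N
A = [[x, one], [zero, one]]
B = [[one, zero], [one, x]]
assert matmul(A, B) != matmul(B, A)
```

Both assertions hold simultaneously, which is exactly why "the scheme is non-commutative" and "the scheme is based on a commutative ring" do not contradict each other.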
> I would also like to see NIST and the community actively discourage
> IP trolling by emphatically, and publicly, refusing to pay anything
> unless it's just some small token amount
I'm puzzled by the description of the 2012 Ding patent, whatever Ding
asked Google to pay, etc. as "IP trolling".
If our community were taking action against people who violate basic
ethics rules regarding credit, rather than constantly making excuses for
those people, then perhaps we'd have fewer patents to worry about---or
at least we'd be taking away the credit excuse and making clear that
it's all about the money.
More to the point, can you imagine facing future cryptographic users and
saying "Yeah, we realized that patents could do even more damage to
post-quantum crypto than Certicom did to ECC, and we had a chance to use
some taxpayer money to wipe out some patents, but we decided not to
because we don't want to feed the trolls"?
That would be a nice first step, but I would hope for much more
transparency from NIST, say weekly progress reports.
---Dan
--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
Vadim Lyubashevsky writes:
> For lattice crypto (and probably isogenies too), however, I can
> confidently say that the currently-active patents made zero scientific
> contributions to any of the finalist/alternate schemes.
This sounds right regarding 2010 Gaborit--Aguilar Melchor, but I don't
see how this can be justified for 2012 Ding.
2010 Gaborit--Aguilar Melchor patented compact noisy-DH encryption using
reconciliation.
For 2012 Ding, the situation is different:
  * 2012 Ding patented _and published_ compact noisy-DH encryption
   using compressed reconciliation, saving a factor ~2 in ciphertext
   size compared to LPR.
  * Saving a factor ~2 in ciphertext size compared to LPR is critical
   for Kyber, NTRU LPRime, SABER, and other modern versions of LPR.
   Without compressing ciphertexts, these would not be competitive in
   sizes with Quotient NTRU: the NTRU submission, Streamlined NTRU
   Prime, etc.
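To put rough numbers on the factor ~2, here is a toy sketch of the rounding idea behind ciphertext compression: transmit each mod-q coefficient in d bits instead of the full ceil(log2 q) bits, and recover it up to a small error that the decryption noise budget absorbs. q = 3329 is Kyber's modulus; d = 6 is chosen here purely to show the halving, not taken from any specification:

```python
# Toy d-bit compression of mod-q values by rounding; illustration only.
import math

Q, D = 3329, 6

def compress(x):
    # Z_q -> Z_{2^d}: keep roughly the top d bits
    return round((2**D / Q) * x) % 2**D

def decompress(y):
    # Z_{2^d} -> Z_q: rescale back
    return round((Q / 2**D) * y) % Q

full_bits = math.ceil(math.log2(Q))  # 12 bits per uncompressed coefficient
worst = 0
for x in range(Q):
    r = decompress(compress(x))
    worst = max(worst, min((x - r) % Q, (r - x) % Q))
print(f"{full_bits} bits -> {D} bits per coefficient; "
      f"worst-case rounding error {worst} (vs q = {Q})")
assert worst <= Q // 2**(D + 1) + 1  # error stays small relative to q
```

Going from 12 bits to 6 bits per coefficient halves the ciphertext contribution, which is the kind of savings the paragraph above attributes to compressed reconciliation.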
Scientifically, how can one justify denying 2012 Ding credit for
compressing LPR ciphertexts?
Any effort to claim that compressing LPR ciphertexts was already obvious
runs into a big problem, namely 2014 Peikert, which claims as its "main
technical innovation" a "low-bandwidth" reconciliation technique that
"reduces the ciphertext length" of previous "already compact" schemes
"nearly twofold".
Nowadays everyone understands how trivial it is to chop low bits off ciphertexts.
The message I'm replying to claims that people now compressing "public
key encryption" schemes are actually doing something different from 2012
Ding and 2014 Peikert compressing "reconciliation". However:
  (1) I already went super-carefully through this claim in unanswered
    email dated 1 Jan 2021 13:19:26 +0100. As far as I can tell, the
    distinction is mathematically and cryptographically untenable.
  (2) Like the bias argument, this in any case can't justify claiming
    that 2012 Ding "made zero scientific contributions to any of the
    finalist/alternate schemes".
_If_ there's some way to rescue a well-defined distinction between LPR
variant 1 and LPR variant 2 (I'm skeptical) _and_ some serious argument
that this matters for the compression, then a proper credit statement
would be something like "2012 Ding compressed variant-1 LPR ciphertexts,
and then [...] compressed variant-2 LPR ciphertexts".
> he leaves out the important fact that the patent does
> not apply to Kyber/Saber/Frodo because those schemes are *non-commutative*
> (whereas NewHope and NTRU-LPRime are).
I spelled out in email dated 13 Dec 2020 18:37:43 +0100 exactly how the
patent would apply to Kyber even in a fantasy world without the doctrine
of equivalents.
There was one message labeled as a "reply" to this, but that message
didn't follow the traditional email-handling practice of quoting and
replying to each point.
Anyone who reviews the content can see that it
wasn't actually answering the message it was labeled as replying to, but
was instead restating earlier arguments that had already been answered.
The tribunal is
_not_ faced with the question of whether the patent covers Kyber and
SABER. (Formally, even Frodo isn't directly at issue, but it's much
closer to the pre-patent work than Kyber and SABER are, and I don't see
how the patent holders would try bringing a case against it.)