See https://ntruprime.cr.yp.to/latticerisks-20211031.pdf for an extensive
analysis of the state of cryptanalysis and proofs for lattice-based
systems; see Section 5.3 in particular regarding rounding.
Summary:
- Multiple remaining NISTPQC lattice KEM proposals rely entirely on "small (deterministic) rounding," rather than random errors, for their security.
- This approach is relatively new -- especially the "small" aspect -- but it has received much less public attention compared to other aspects of lattice KEM designs.
- Known proofs don't come close to applying to the KEMs' small rounding. Moreover, other proof techniques that give important insights about random errors don't apply to (small) rounding. That leaves cryptanalysis as our only real basis for confidence.
- So, I ask: what (quantum) cryptanalytic effort has been devoted specifically to small rounding? Does it give enough confidence to use it as the foundation of a long-term PQC standard?
- From what I can tell, there has been little specific effort -- the existing analyses essentially "assume away" any potential difference between deterministic rounding errors and random ones. Maybe this is a valid heuristic, but how well has it been tested? Are there other analyses that don't rely on this heuristic?
- Perhaps people have made serious attempts at other types of attacks that weren't shared publicly. It would be good to know what has (and hasn't) been tried -- even/especially if it didn't amount to anything.
--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
To view this discussion on the web visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/CACOo0QiZuYZVvbNmM2dV-t2L1Pdz2OhKtq4Qc_LJMrgQnxY2BQ%40mail.gmail.com.
You can obtain bounds on the hardness of the search version of small rounding over rings using the Rényi divergence: rounded samples are close (in Rényi divergence) to rounded samples with added noise (https://eprint.iacr.org/2015/769.pdf).
To be clear, I'm not endorsing the above argument; I'm using it as a
concise way to illustrate a fatal flaw in the original posting in this
thread.
NTRU uses small noise in full-dimension rings, and has been the subject
of decades of cryptanalysis.
The problem of breaking small noise in
modules of rank 3 over rings of only 1/3 dimension, as in Kyber-768, is
much newer. Is there enough cryptanalysis of rank-3 noise?
Theorems that might give a "modules are guaranteed safe" impression have
limitations rendering them logically inapplicable to NISTPQC. See, e.g.,
Section 5.5 of https://ntruprime.cr.yp.to/latticerisks-20211031.pdf.
Ergo, cryptosystems relying on noise in modules of rank 3 shouldn't be
standardized.
P.S. https://ntruprime.cr.yp.to/latticerisks-20211031.pdf already covers
the possibility of rounding being broken while randomly generated noise
survives, _and vice versa_. Section 5.3 of the paper observes that known
proofs don't rule out either possibility. (Of course, smallness plays a
role in the details.) The paper systematically surveys known attack
avenues against lattices (see especially Table 1.1 and Section 3), and
Section 3.5 mentions that none of these avenues would lead to rounding
being broken while noise survives or vice versa.
Regarding empirical testing of tractable parameters for LWE instances
with secret/error coefficients in {-1,0,1}, in [1] we did run some
experiments.
> > The problem of breaking small noise in
> > modules of rank 3 over rings of only 1/3 dimension, as in Kyber-768, is
> > much newer. Is there enough cryptanalysis of rank-3 noise?
> To answer the question directly: quite possibly yes, there is enough. In
> any case: there is a good deal of cryptanalysis
Please identify the citations you're alluding to here.
> Almost all the cryptanalytic and theoretical effort over decades has been
> devoted to random errors, with apparently almost none to small rounding.
Please explain what you mean by "devoted".
Are you claiming that LLL
Christopher J Peikert writes:
> Here's a sampling of papers that cryptanalyze modules over rings, like
> rank-2 NTRU lattices.
The question was about cryptanalysis of "small noise in modules of rank
3 over rings of only 1/3 dimension, as in Kyber-768". You're switching
to a broader topic, and not making clear for readers that the papers
you've cited don't answer the original question.
> https://www.ntru.org/f/hps98.pdf
Why is pointing to cryptanalysis of NTRU as supporting the security of
Kyber-768 supposed to be more defensible than pointing to cryptanalysis
of NTRU as supporting the security of rounding?
Why exactly is it supposed to be okay to count these experiments towards
"enough cryptanalysis" of systems that move to rank-3 modules over a
ring of only 1/3 dimension (e.g., kyber768), when it's not okay to count
these towards "enough cryptanalysis" of systems that move from randomly
generated noise to rounding (e.g., sntrup761 or ntrulpr761)?
If proofs are supposed to be important, why are
proof limitations being selectively advertised in just one direction?
Shouldn't we be worrying about _all_ of the proof limitations?
The beginning of this thread clearly asked "what (quantum) cryptanalytic
effort has been devoted specifically to small rounding?"...
Proofs _also_ don't rule out attacks specific to the submissions using
rank-3 modules over rings of 1/3 dimension.
To address Chris's explicit request for sanity checks on small rounding vs. random error, we can report that we did run such experiments. In particular, we tested the success probability of BKZ with varying block sizes at recovering the secret key of rank-2 module-LWE with random noise vs. with rounded noise, on instances small enough that BKZ succeeds with high probability.
The conclusion of our tests is rather unsurprising: we could not find any statistically significant difference between the success probabilities of the two kinds of instances, across all block sizes and numbers of samples.
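For readers who want to reproduce this kind of sanity check, here is a minimal sketch of how the two kinds of instances differ. All parameters and names are toy illustrations of my own (the experiments reported above used rank-2 module-LWE and actual BKZ runs, which this sketch omits):

```python
import numpy as np

# Toy parameters, illustrative only (not the tested module-LWE parameters).
q, p, n, m = 3329, 1024, 8, 16
rng = np.random.default_rng(0)

A = rng.integers(0, q, size=(m, n))
s = rng.integers(-1, 2, size=n)        # small ternary secret

# "Random noise" instance: explicit small error added to A*s.
e = rng.integers(-1, 2, size=m)
b_lwe = (A @ s + e) % q

# "Rounded noise" instance: deterministic, no explicit error term --
# b is just A*s scaled down from mod q to mod p and rounded.
b_lwr = ((A @ s % q) * p + q // 2) // q % p
```

An attack experiment would then run lattice reduction against both kinds of instances and compare success rates, as reported above.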
Chris also states that "Rounding generates 'noise' in a narrow, structured way", but it is unclear what is meant by "narrow". Does "narrow" refer to the resulting noise?
It would be interesting to hear people's opinions on what is perceived as 'the main weakness' of rounding (if there is such a thing): is it the fact that the noise is generated deterministically, or more specifically that it is obtained as a result of rounding? As a thought experiment: do people think it would be more secure to deterministically generate binomial noise (with the same variance, of course), where the coefficients of the lattice point A^t*s are used as the seed (instead of rounding)?
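To make the thought experiment concrete, "deterministically generated binomial noise" might be sketched as follows. The function name, the choice of SHAKE-256 as PRG, and the parameters are hypothetical illustrations, not a proposal:

```python
import hashlib
import numpy as np

def seeded_binomial(seed: bytes, m: int, eta: int = 2) -> np.ndarray:
    """Deterministically expand `seed` into m centered-binomial samples
    (difference of two sums of eta coin flips, variance eta/2)."""
    nbytes = -(-m * 2 * eta // 8)        # ceil(m*2*eta / 8) bytes of PRG output
    stream = hashlib.shake_256(seed).digest(nbytes)
    bits = np.unpackbits(np.frombuffer(stream, dtype=np.uint8))[: m * 2 * eta]
    sums = bits.reshape(m, 2, eta).sum(axis=2)
    return sums[:, 0].astype(int) - sums[:, 1].astype(int)

# Per the thought experiment, the seed would come from the coefficients
# of A^t*s, e.g.: seed = (A.T @ s % q).astype(np.int64).tobytes()
noise = seeded_binomial(b"example-seed", m=8)
```

Like rounding, this makes the "error" a deterministic function of the lattice point, but its marginal distribution matches the random binomial case exactly.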
Of course, we are very much interested in hearing any possible attack avenues that could exploit rounding, but so far there has not even been a suggestion in this direction.
Anyone looking at the attack analyses sees more and more questions to
investigate---none of which point to any way to break rounding without
breaking randomly generated noise, or vice versa.
To be clear, the bigger question here concerns the standards being
applied in this thread for risk management and the allocation of
valuable cryptanalytic time.
Proofs aren't helpful
here: they don't rule out the possibility of rounding being weaker, and
they also don't rule out the possibility of random noise being weaker.
If we're supposed to worry about KEMs using rounding, a feature that has
never created security problems, then shouldn't we be much more worried
about Kyber using low noise without even analyzing _known_ attacks that
exploit low noise? Can we have enough confidence to rely on Kyber's new
error distribution for long-term post-quantum security?
I would add that, unless I misremember Kyber (or it got tweaked a bunch), this part makes no sense for a reason Chris did not mention:

> If we're supposed to worry about KEMs using rounding, a feature that has
> never created security problems, then shouldn't we be much more worried
> about Kyber using low noise without even analyzing _known_ attacks that
> exploit low noise? Can we have enough confidence to rely on Kyber's new
> error distribution for long-term post-quantum security?

Kyber rounds first, then adds low random noise.
> It’s swapped: Kyber first adds random error, then rounds down somewhat. But
> the point basically remains the same: both kinds of error seem to
> contribute to hardness. For attacks, it’s not enough to consider only the
> random noise and not the additional rounding noise.
This is strictly speaking correct, but might still be misunderstood. Of
course for *attacks* both kinds of noise contribute to the hardness.
However, the security analysis of Kyber-768 (NIST level 3) and
Kyber-1024 (NIST level 5) is more conservative and relies entirely on
the random (LWE) noise. Only in the analysis of Kyber-512 do we also
present numbers that additionally take into account the noise introduced
by rounding. However, we *also* present the number that is based solely
on random noise. The difference between these two is 6 bits of classical
security.
Thanks for running these sanity checks (is the code publicly available anywhere?). Indeed, this is the expected outcome -- BKZ wasn't designed to exploit rounding -- but it's good to have confirmation. I think this should be seen as a first step in the serious cryptanalysis of small rounding.
All lattice-based encryption/KEM schemes rely on adding some kind of "error" for security. Without such error, the schemes would be trivially breakable.
Ten years ago, coauthors and I proposed the idea of using deterministic "rounding error" instead of random error in lattice-based constructions. This yields the "with rounding" family of problems, like Learning With Rounding (over Rings/Modules) etc.
The main idea is simple: in order to add error to an integer, just round it to the nearest multiple of some fixed integer p. This can be seen as adding some "deterministic error" in the range [-p/2, p/2]. And since the result r is a multiple of p, we can instead work with r/p. So, this rounding effectively "drops" the least-significant log_2(p) bits of the input.
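As a toy illustration of the rounding just described (the function name and the specific p are mine, not from any submission), a minimal sketch:

```python
# Toy sketch of deterministic "rounding error" (illustrative only).

def round_to_multiple(x: int, p: int) -> int:
    """Round x to the nearest multiple of p."""
    return ((x + p // 2) // p) * p

x, p = 181, 8                 # p = 2^3, so rounding drops 3 low bits
r = round_to_multiple(x, p)   # a multiple of p
e = x - r                     # the deterministic "error", in [-p/2, p/2)
assert -p // 2 <= e < p // 2
# Working with r // p is the same as keeping only the high bits of x:
assert r // p == (x + p // 2) >> 3
```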
While we and others managed to prove some things about the hardness of "with rounding" problems, the proofs require much more rounding than what the NISTPQC KEMs do.
More specifically, two KEMs rely entirely on small rounding for their security:
- SABER (a finalist) drops 3-7 bits of each integer coefficient.
- NTRU Prime (an alternate) rounds to the nearest multiple of 3, effectively dropping just log_2(3) < 1.6 bits per coefficient.
(In addition, finalist Kyber uses random error, then also does some rounding for compression; both forms of error are accounted for in the security analysis.)
This is much less rounding than what was originally considered. For example, our work showed that dropping 100+b bits is at least as secure as using random error of about b bits (for a statistical security parameter of about 100).
Of course, 100+b is much more than 3 or 1.6, so this proof is very far from applying to the KEMs' small rounding. Later works somewhat reduced the "100" in certain contexts, but not to anything close to what the KEMs use.
Moreover, several other proof techniques that provide insights about random errors do not appear to translate to deterministic rounding, especially with rings/modules (as used by the KEMs). These include the safety of using a small "secret" drawn from the error distribution; sample-preserving search-to-decision reductions; and the quantum reduction that says "solving BDD with random errors implies finding short (dual) lattice vectors".
Since known proofs tell us very little about small rounding, that leaves cryptanalysis as our only basis for evaluating its security. What has been tried? How much confidence should we have?
The analyses I've seen just heuristically model rounding error as being "random-ish" in the rounding range, then proceed with the usual kinds of lattice analyses (Core-SVP and the like).
This heuristic is plausible, but it "assumes away" any potential distinction between random and deterministic error from the outset. In other words, if there is any significant difference in security between the two kinds of error, the heuristic completely conceals it.
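As a small illustration of what the heuristic asserts at the level of a single coefficient (ignoring all lattice structure, which is precisely where any real difference would have to hide), one can compare the marginal distribution of rounding errors against truly random ones; parameters here are arbitrary:

```python
import numpy as np

q, p = 3329, 8
rng = np.random.default_rng(1)
x = rng.integers(0, q, size=100_000)   # stand-in for coefficients of A*s mod q

r = ((x + p // 2) // p) * p            # nearest multiple of p
err = x - r                            # deterministic rounding error, in [-p/2, p/2)

u = rng.integers(-(p // 2), p // 2, size=100_000)  # truly random, same range

# The marginals match almost exactly; the heuristic's untested content is
# the claim that joint behavior across coordinates also matches.
print(err.mean(), err.std())
print(u.mean(), u.std())
```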
Therefore, I would like to ask the following questions of the community:
- Is there any significant security gap between rounding and "same sized" random errors?
- How well has the heuristic been tested, experimentally and analytically, for known lattice attacks?
- Has anyone tried to develop other kinds of (quantum) attacks that specifically exploit small rounding -- including with small secrets, and over rings/modules? If so, and the attempts didn't work out, what were the obstructions? If not, how much confidence should we have in small rounding?
Sincerely yours in cryptography,
Chris
Hi Chris,

> Moreover, several other proof techniques that provide insights about random errors do not appear to translate to deterministic rounding, especially with rings/modules (as used by the KEMs). These include the safety of using a small "secret" drawn from the error distribution; sample-preserving search-to-decision reductions; and the quantum reduction that says "solving BDD with random errors implies finding short (dual) lattice vectors".

What exactly do you mean by "solving BDD with random errors implies finding short (dual) lattice vectors"?