Is there enough cryptanalysis of "small rounding"?

Christopher J Peikert

Dec 3, 2021, 2:44:39 PM12/3/21
to pqc-forum
Summary:
  • Multiple remaining NISTPQC lattice KEM proposals rely entirely on "small (deterministic) rounding," rather than random errors, for their security.
  • This approach is relatively new -- especially the "small" aspect -- but it has received much less public attention compared to other aspects of lattice KEM designs.
  • Known proofs don't come close to applying to the KEMs' small rounding. Moreover, other proof techniques that give important insights about random errors don't apply to (small) rounding. That leaves cryptanalysis as our only real basis for confidence.
  • So, I ask: what (quantum) cryptanalytic effort has been devoted specifically to small rounding? Does it give enough confidence to use it as the foundation of a long-term PQC standard?
    • From what I can tell, there has been little specific effort -- the existing analyses essentially "assume away" any potential difference between deterministic rounding errors and random ones. Maybe this is a valid heuristic, but how well has it been tested? Are there other analyses that don't rely on this heuristic?
    • Perhaps people have made serious attempts at other types of attacks that weren't shared publicly. It would be good to know what has (and hasn't) been tried -- even/especially if it didn't amount to anything.
All lattice-based encryption/KEM schemes rely on adding some kind of "error" for security. Without such error, the schemes would be trivially breakable.

Ten years ago, coauthors and I proposed the idea of using deterministic "rounding error" instead of random error in lattice-based constructions. This yields the "with rounding" family of problems, like Learning With Rounding (over Rings/Modules) etc.

The main idea is simple: in order to add error to an integer, just round it to the nearest multiple of some fixed integer p. This can be seen as adding some "deterministic error" in the range [-p/2, p/2]. And since the result r is a multiple of p, we can instead work with r/p. So, this rounding effectively "drops" the least-significant log_2(p) bits of the input.

While we and others managed to prove some things about the hardness of "with rounding" problems, the proofs require much more rounding than what the NISTPQC KEMs do.

More specifically, two KEMs rely entirely on small rounding for their security:
  1. SABER (a finalist) drops 3-7 bits of each integer coefficient.
  2. NTRU Prime (an alternate) rounds to the nearest multiple of 3, effectively dropping just log_2(3) < 1.6 bits per coefficient.
(In addition, finalist Kyber uses random error, then also does some rounding for compression; both forms of error are accounted for in the security analysis.)

This is much less rounding than what was originally considered. For example, our work showed that dropping 100+b bits is at least as secure as using random error of about b bits (for a statistical security parameter of about 100).

Of course, 100+b is much more than 3 or 1.6, so this proof is very far from applying to the KEMs' small rounding. Later works somewhat reduced the "100" in certain contexts, but not to anything close to what the KEMs use.

Moreover, several other proof techniques that provide insights about random errors do not appear to translate to deterministic rounding, especially with rings/modules (as used by the KEMs). These include the safety of using a small "secret" drawn from the error distribution; sample-preserving search-to-decision reductions; and the quantum reduction that says "solving BDD with random errors implies finding short (dual) lattice vectors".

Since known proofs tell us very little about small rounding, that leaves cryptanalysis as our only basis for evaluating its security. What has been tried? How much confidence should we have?

The analyses I've seen just heuristically model rounding error as being "random-ish" in the rounding range, then proceed with the usual kinds of lattice analyses (Core-SVP and the like).

This heuristic is plausible, but it "assumes away" any potential distinction between random and deterministic error from the outset. In other words, if there is any significant difference in security between the two kinds of error, the heuristic completely conceals it.

Therefore, I would like to ask the following questions of the community:
  • Is there any significant security gap between rounding and "same sized" random errors?
  • How well has the heuristic been tested, experimentally and analytically, for known lattice attacks?
  • Has anyone tried to develop other kinds of (quantum) attacks that specifically exploit small rounding -- including with small secrets, and over rings/modules? If so, and the attempts didn't work out, what were the obstructions? If not, how much confidence should we have in small rounding?
Sincerely yours in cryptography,
Chris

Tanja Lange

Dec 5, 2021, 7:04:45 AM12/5/21
to pqc-forum
See https://ntruprime.cr.yp.to/latticerisks-20211031.pdf for an extensive
analysis of the state of cryptanalysis and proofs for lattice-based
systems and section 5.3 in particular regarding rounding.

Tanja

daniel.apon

Dec 5, 2021, 7:23:58 AM12/5/21
to pqc-forum, Tanja Lange, pqc-forum
Tanja, since you have quoted me directly:

In response to footnote 12: Section 5.3 clearly misrepresents the state of proofs for lattice-based systems. If you have (even an asymptotic) reduction for LWR and GapSVP matching Regev's 2004 work, I have no doubt the entire community would be quite interested.

However, raising this issue (and highlighting Section 5.3 of this paper) is wholly a distraction from the original intent of the first post in this thread. It would be extremely unfortunate and misleading to derail the original intent of this thread:
Is there enough cryptanalysis of "small rounding"?

It would be better to stay on topic.

Daniel

Christopher J Peikert

Dec 5, 2021, 8:55:53 AM12/5/21
to pqc-forum
On Sun, Dec 5, 2021 at 7:04 AM Tanja Lange <ta...@hyperelliptic.org> wrote:
See https://ntruprime.cr.yp.to/latticerisks-20211031.pdf for an extensive
analysis of the state of cryptanalysis and proofs for lattice-based
systems and section 5.3 in particular regarding rounding.

Section 5.3 (it’s short) doesn’t say *anything* about the state of cryptanalysis of rounding. If that’s an “extensive analysis” of the subject, then there must not have been *any* cryptanalysis specially devoted to rounding. If that’s true, it would answer my question from the subject line with a clear “no.”

The rest of Section 5.3 is consistent with my point that the proofs for rounding are very far from applying to the small rounding done by SABER and NTRU Prime.

To be clear: I am not suggesting that I know of some serious attack on small rounding. It would be wrong to suggest—or even worse, claim—the existence of an attack without being ready to substantiate it.

Christopher J Peikert

Jan 8, 2022, 2:05:20 PM1/8/22
to pqc-forum
Dear all, happy new year.

On Fri, Dec 3, 2021 at 2:44 PM Christopher J Peikert <cpei...@alum.mit.edu> wrote:
Summary:
  • Multiple remaining NISTPQC lattice KEM proposals rely entirely on "small (deterministic) rounding," rather than random errors, for their security.
  • This approach is relatively new -- especially the "small" aspect -- but it has received much less public attention compared to other aspects of lattice KEM designs.
  • Known proofs don't come close to applying to the KEMs' small rounding. Moreover, other proof techniques that give important insights about random errors don't apply to (small) rounding. That leaves cryptanalysis as our only real basis for confidence.
  • So, I ask: what (quantum) cryptanalytic effort has been devoted specifically to small rounding? Does it give enough confidence to use it as the foundation of a long-term PQC standard?
    • From what I can tell, there has been little specific effort -- the existing analyses essentially "assume away" any potential difference between deterministic rounding errors and random ones. Maybe this is a valid heuristic, but how well has it been tested? Are there other analyses that don't rely on this heuristic?
    • Perhaps people have made serious attempts at other types of attacks that weren't shared publicly. It would be good to know what has (and hasn't) been tried -- even/especially if it didn't amount to anything.

Since my email from five weeks ago, nobody has described any cryptanalytic effort targeting small rounding -- even unsuccessful attempts.

This is pretty concerning. If there really hasn't been any significant cryptanalysis, can we have enough confidence to rely on small rounding for long-term post-quantum security?

Daniel Masny

Jan 8, 2022, 3:50:58 PM1/8/22
to pqc-...@list.nist.gov
Hi,

I am also unaware of any cryptanalysis.

You can obtain bounds on the hardness of the search version of small rounding over rings using the Renyi divergence. Rounded samples are close (via Renyi divergence) to rounded samples with noise (https://eprint.iacr.org/2015/769.pdf). The drawback, however, is that the divergence grows quite quickly with the number of samples. For rounded public keys, where the number of samples is fixed, you should get some meaningful bounds. For ciphertexts you should still get meaningful bounds, since the samples use different randomness / LWE secrets, though you lose a multiplicative factor q, where q is the number of ciphertext queries (unless you rerandomize ciphertexts with enough randomness via the leftover hash lemma). I think these bounds should somewhat limit cryptanalysis.

I haven't run the numbers. Is this approach already considered among the known proofs you referred to as not coming close?

Happy new year to you as well & everyone else!

Best
Daniel

Christopher J Peikert

Jan 8, 2022, 4:34:00 PM1/8/22
to Daniel Masny, pqc-forum
On Sat, Jan 8, 2022 at 3:50 PM 'Daniel Masny' via pqc-forum <pqc-...@list.nist.gov> wrote:
You can obtain bounds on the hardness of the search version of small rounding over rings using the Renyi divergence. Rounded samples are close (via Renyi divergence) to rounded samples with noise (https://eprint.iacr.org/2015/769.pdf).

I have considered these techniques, but don't see how they could say anything useful about the very small rounding employed by the remaining NIST proposals.

For example, NTRU Prime uses rounding errors from {0, +1, -1}. This results in a large Renyi divergence from "rounded noisy samples," for any nontrivial amount of noise. Do you agree, or is there a way around it?

Some details: using the notation from Theorem 3 of the linked paper, the ratio p/q is essentially 1/3. The noise bound B >= 1. The number of "rounded linear equations" (samples) m is in the hundreds. So, the loss factor in Equation (2) from Renyi alone is less than (3/5)^hundreds.

Daniel Masny

Jan 8, 2022, 7:49:13 PM1/8/22
to pqc-forum
Hi,

Yes I agree with you. Thanks for pointing this out and making it more explicit!

I don't see any way around it; the loss caused by the number of samples is just too significant for such small rounding bounds/errors.

Best
Daniel

D. J. Bernstein

Jan 9, 2022, 11:58:21 PM1/9/22
to pqc-...@list.nist.gov
NTRU uses small noise in full-dimension rings, and has been the subject
of decades of cryptanalysis. The problem of breaking small noise in
modules of rank 3 over rings of only 1/3 dimension, as in Kyber-768, is
much newer. Is there enough cryptanalysis of rank-3 noise?

Theorems that might give a "modules are guaranteed safe" impression have
limitations rendering them logically inapplicable to NISTPQC. See, e.g.,
Section 5.5 of https://ntruprime.cr.yp.to/latticerisks-20211031.pdf.

Ergo, cryptosystems relying on noise in modules of rank 3 shouldn't be
standardized.

To be clear, I'm not endorsing the above argument; I'm using it as a
concise way to illustrate a fatal flaw in the original posting in this
thread.

---Dan

P.S. https://ntruprime.cr.yp.to/latticerisks-20211031.pdf already covers
the possibility of rounding being broken while randomly generated noise
survives, _and vice versa_. Section 5.3 of the paper observes that known
proofs don't rule out either possibility. (Of course, smallness plays a
role in the details.) The paper systematically surveys known attack
avenues against lattices (see especially Table 1.1 and Section 3), and
Section 3.5 mentions that none of these avenues would lead to rounding
being broken while noise survives or vice versa.

So I don't see how the original posting in this thread is making any new
observations. It's simply selecting a subset of the known possibilities
(and a corresponding subset of the proof limitations logically allowing
those possibilities), and using a broken methodology to hype that subset.

Jacob Alperin-Sheriff

Jan 10, 2022, 8:18:57 AM1/10/22
to pqc-...@list.nist.gov
In a no doubt futile effort to get this thread back on track and keep it from degenerating into the classic Bernstein v Peikert back and forth thread we all know and love …

I did spend some time thinking about the issue of small rounding parameters while at NIST and made no progress, but I didn't fully try to apply combinations of ring-structure ideas like I probably should have.

I also did nothing in the way of empirical testing of various lattice reduction/attack algorithms for tractable parameters on lattices formed from instances of such problems; if Albrecht et al. didn't do that either, it should be done.

--
-Jacob Alperin-Sheriff

Martin R. Albrecht

Jan 10, 2022, 12:33:42 PM1/10/22
to pqc-...@list.nist.gov
On Mon, Jan 10 2022, Jacob Alperin-Sheriff wrote:
> I also did nothing in the way of empirical testing of various lattice
> reduction/attack algorithms for tractable parameters on lattices formed
> from instances of such problems; if Albrecht et Al didn’t do that either it
> should be done

I can answer this one! I haven’t run such experiments.

Cheers,
Martin

--

_pgp: https://keybase.io/martinralbrecht
_www: https://malb.io
_prn: he/him or they/them

Christopher J Peikert

Jan 10, 2022, 12:59:24 PM1/10/22
to pqc-forum
On Sun, Jan 9, 2022 at 11:58 PM D. J. Bernstein <d...@cr.yp.to> wrote: 
To be clear, I'm not endorsing the above argument; I'm using it as a
concise way to illustrate a fatal flaw in the original posting in this
thread.

This is a poor analogy to (or illustration of) the issue I've raised, for the reasons given below.

NTRU uses small noise in full-dimension rings, and has been the subject
of decades of cryptanalysis.

Indeed! (Though it's not entirely clear what's meant by "full-dimension rings"; see next).
 
The problem of breaking small noise in
modules of rank 3 over rings of only 1/3 dimension, as in Kyber-768, is
much newer. Is there enough cryptanalysis of rank-3 noise?

To answer the question directly: quite possibly yes, there is enough. In any case: there is a good deal of cryptanalysis, so the analogy doesn't hold up.

As noted above, there is abundant cryptanalysis of NTRU lattices, including much that specifically relies on their rank-2 module structure over a degree-n polynomial ring, yielding 2n-dimensional lattices.

These cryptanalytic ideas can be (and have been) straightforwardly applied to rank-3 and higher-rank modules as well. Holding the lattice dimension and other parameters fixed, none of the attacks do any better as the rank increases -- and in some cases, they do worse. So, we have significant evidence that increasing the rank doesn't harm security, and none pointing in the other direction.

By contrast, nobody has yet pointed to any serious cryptanalytic effort that specifically targets small deterministic rounding. We have little idea what we might be missing.

This large difference in the amount of cryptanalysis is already enough to see why the analogy between the two cases fails. But there are differences on the theoretical side too.
 
Theorems that might give a "modules are guaranteed safe" impression have
limitations rendering them logically inapplicable to NISTPQC. See, e.g.,
Section 5.5 of https://ntruprime.cr.yp.to/latticerisks-20211031.pdf.

There's no dispute about whether the known "module theorems" give end-to-end concrete security bounds for remaining NISTPQC proposals -- they don't. However, this does not make the theorems useless for developing some confidence about the proposed uses of modules.

For one, the theorems identify good distributions of random module lattices, which are used in some of the proposals, and which have held up to cryptanalysis just as well as "unstructured" lattices have. This is in contrast to other ad-hoc distributions that have been proposed in the history of lattice cryptography, and later broken. (Similar comments apply to theorems about random errors, as noted in my original message.)

By contrast, there are no theorems that say anything nontrivial about small rounding.

This large difference in the theoretical understanding of module lattices versus small rounding is another reason why the analogy fails.
 
Ergo, cryptosystems relying on noise in modules of rank 3 shouldn't be
standardized.

This isn't a conclusion that anyone has expressed in this thread.

P.S. https://ntruprime.cr.yp.to/latticerisks-20211031.pdf already covers
the possibility of rounding being broken while randomly generated noise
survives, _and vice versa_. Section 5.3 of the paper observes that known
proofs don't rule out either possibility. (Of course, smallness plays a
role in the details.) The paper systematically surveys known attack
avenues against lattices (see especially Table 1.1 and Section 3), and
Section 3.5 mentions that none of these avenues would lead to rounding
being broken while noise survives or vice versa.

If there really hasn't been any significant cryptanalytic effort on small rounding, then looking at "known attack avenues" tells us very little. Sure, it's a useful sanity check -- though it's not clear whether even this check has been done experimentally! (Please, somebody tell me that there are at least some systematic experiments that test LLL, BKZ, etc. against small rounding... Jacob and Martin have now said that they didn't do such.)

The key question is whether there is any as-yet-unknown (quantum) attack avenue that can exploit small rounding, and what effort has been put in to try to find one.

Almost all the cryptanalytic and theoretical effort over decades has been devoted to random errors, with apparently almost none to small rounding. So, the lack of a known avenue just reflects a lack of serious effort thus far, and should not be used to put "random" and "rounding" on the same unearned footing (as Table 1.1 and Section 3.5 of the above link do), nor as a basis of confidence in small rounding.

Fernando Virdia

Jan 10, 2022, 3:17:48 PM1/10/22
to pqc-...@list.nist.gov


On 10/01/2022 18:33, 'Martin R. Albrecht' via pqc-forum wrote:
> On Mon, Jan 10 2022, Jacob Alperin-Sheriff wrote:
>> I also did nothing in the way of empirical testing of various lattice
>> reduction/attack algorithms for tractable parameters on lattices formed
>> from instances of such problems; if Albrecht et Al didn’t do that either it
>> should be done
>
> I can answer this one! I haven’t run such experiments.
>
> Cheers,
> Martin
>

Regarding empirical testing of tractable parameters for LWE instances
with secret/error coefficients in {-1,0,1}, in [1] we did run some
experiments.

In the context of that paper, our main preoccupation for those
particular experiments was testing whether LWE instances with secret and
error coefficients sampled from a uniform distribution over {-1,1} or
{-1,0,1} ("bounded uniform" distributions) would be any easier to solve
(in practice, using progressive BKZ) than LWE instances of the same
dimension and number of samples and using secret and error coefficients
sampled from discrete Gaussian distributions having the same variance as
the bounded uniform distributions.

To the extent of our experiments, our results in that context showed no
major difference between the hardness of discrete Gaussians and bounded
uniform distributions.

The parameters for those experiments were chosen purposely so that they
would have a good chance to be solved by block size 70 (we were testing
techniques for predicting the success probability of solving unique-SVP
using progressive BKZ by a certain block size). However, setting up
bigger and harder-but-solvable instances based on LWR/NTRU designs
should be straightforward.

For a detailed explanation of the experimental setup and the resulting
observations you can refer to Section 5 of [1] (Figure 6 for bounded
uniform plots). A slightly refined version of that write-up can be found
in Chapter 4 of [2].

Cheers
Fernando


[1] E. W. Postlethwaite, F. Virdia, "On the Success Probability of
Solving Unique SVP via BKZ", PKC 2021, https://eprint.iacr.org/2020/1308

[2] F. Virdia, "Post-Quantum Cryptography: Cryptanalysis and
Implementation", PhD thesis, https://fundamental.domains/2021virdiafphd.pdf

Fernando Virdia

Jan 10, 2022, 3:22:42 PM1/10/22
to pqc-...@list.nist.gov

Christopher J Peikert

Jan 10, 2022, 3:31:46 PM1/10/22
to Fernando Virdia, pqc-...@list.nist.gov
On Mon, Jan 10, 2022 at 3:17 PM Fernando Virdia <fernando.v...@live.rhul.ac.uk> wrote:

Regarding empirical testing of tractable parameters for LWE instances
with secret/error coefficients in {-1,0,1}, in [1] we did run some
experiments.

To clarify, did these experiments test standard lattice algorithms against deterministic rounding (instead of random errors), with rounding error in {-1,0,1}? This is the kind of experiment that Jacob and Martin and I were referring to.

From your description, it sounds like the experiments were testing for any empirical difference between random errors with bounded-uniform versus Gaussian distribution. This is an interesting but different matter from deterministic rounding errors.

Sincerely yours in cryptography,
Chris


Fernando Virdia

Jan 10, 2022, 3:37:23 PM1/10/22
to Christopher J Peikert, pqc-...@list.nist.gov
As per the postscript, they indeed aren't.

Fernando

On 10/01/2022 21:31, Christopher J Peikert wrote:
> On Mon, Jan 10, 2022 at 3:17 PM Fernando Virdia <fernando.v...@live.rhul.ac.uk> wrote:
>
>
> Regarding empirical testing of tractable parameters for LWE instances
> with secret/error coefficients in {-1,0,1}, in [1] we did run some
> experiments.
>
>
> To clarify, did these experiments test standard lattice algorithms
> against /deterministic rounding/ (instead of /random/ errors), with
> rounding error in {-1,0,1}? This is the kind of experiment that Jacob
> and Martin and I were referring to.
>
> From your description, it sounds like the experiments were testing for
> any empirical difference between random errors with bounded-uniform
> versus Gaussian distribution. This is an interesting but different
> matter from deterministic rounding errors.
>
> Sincerely yours in cryptography,
> Chris
>
>
> In the context of that paper, our main preoccupation for those
> particular experiments was testing whether LWE instances with secret and
> error coefficients sampled from a uniform distribution over {-1,1} or
> {-1,0,1} ("bounded uniform" distributions) would be any easier to solve
> (in practice, using progressive BKZ) than LWE instances of the same
> dimension and number of samples and using secret and error coefficients
> sampled from discrete Gaussian distributions having the same variance as
> the bounded uniform distributions.
>

D. J. Bernstein

Jan 10, 2022, 8:16:52 PM1/10/22
to pqc-...@list.nist.gov
> > The problem of breaking small noise in
> > modules of rank 3 over rings of only 1/3 dimension, as in Kyber-768, is
> > much newer. Is there enough cryptanalysis of rank-3 noise?
> To answer the question directly: quite possibly yes, there is enough. In
> any case: there is a good deal of cryptanalysis

Please identify the citations you're alluding to here. Are you claiming
that (e.g.) the Kyber submission qualifies as "enough cryptanalysis" of
rank-3 noise and that (e.g.) the SABER submission doesn't qualify as
"enough cryptanalysis" of small rounding?

> Almost all the cryptanalytic and theoretical effort over decades has been
> devoted to random errors, with apparently almost none to small rounding.

Please explain what you mean by "devoted". Are you claiming that LLL
etc., the attacks used in the original reduction experiments with NTRU
matrices, would have been capable of distinguishing randomly generated
noise from rounding, but incapable of distinguishing rank 1 from rank 3?
These were "devoted" to randomly generated noise, and not to rank 1?

---Dan

Christopher J Peikert

Jan 12, 2022, 2:48:36 PM1/12/22
to pqc-forum
On Mon, Jan 10, 2022 at 8:16 PM D. J. Bernstein <d...@cr.yp.to> wrote:
> > The problem of breaking small noise in
> > modules of rank 3 over rings of only 1/3 dimension, as in Kyber-768, is
> > much newer. Is there enough cryptanalysis of rank-3 noise?
> To answer the question directly: quite possibly yes, there is enough. In
> any case: there is a good deal of cryptanalysis

Please identify the citations you're alluding to here.

Here's a sampling of papers that cryptanalyze modules over rings, like rank-2 NTRU lattices. As mentioned before, most of the ideas easily generalize to rank 3 and above, with varying results. Some papers manage to usefully exploit the associated structure, for certain parameters. Others get little advantage from the module structure for the studied parameters.

https://eprint.iacr.org/2021/799 (survey; see more references within)

(For what it's worth, I also spent a good part of summer 2016 trying to exploit NTRU's "ratio of small ring elements" structure, using various algebraic approaches. But I have nothing to show for it except a bunch of over-optimistic emails... Perhaps others have similar experiences.)

Whether there is "enough cryptanalysis" and understanding of modules is a judgment call, but at least there is a good deal of it -- along with good theory.

By contrast, nobody has yet pointed out any serious effort to cryptanalyze small rounding -- indeed, we haven't even seen sanity-check experiments using standard lattice algorithms. (I don't expect such experiments to reveal any major problems, but you never know until you try.)
 
> Almost all the cryptanalytic and theoretical effort over decades has been
> devoted to random errors, with apparently almost none to small rounding.

Please explain what you mean by "devoted".

I mean that the error is modeled as random (not deterministic), experiments are run on instances having random errors, theorems and reductions apply to random errors, and the like.
 
Are you claiming that LLL

I'm not claiming anything about any algorithm here. My comments quite clearly are about the amount of cryptanalytic and theoretical effort that has been put into studying small rounding, in contrast to other structures that are present in NISTPQC lattice proposals.

Noah Stephens-Davidowitz

Jan 12, 2022, 3:18:42 PM1/12/22
to Christopher J Peikert, pqc-forum
In case it's useful for the discussion of rank 1 vs rank 2 vs rank 3: there is a theoretical reduction from SVP over modules of any rank to SVP over modules of rank 2, with an associated loss in the approximation factor. It can be found here https://eprint.iacr.org/2019/1035 and here https://eprint.iacr.org/2019/1142.


D. J. Bernstein

Jan 13, 2022, 12:44:07 AM1/13/22
to pqc-forum
Christopher J Peikert writes:
> Here's a sampling of papers that cryptanalyze modules over rings, like
> rank-2 NTRU lattices.

The question was about cryptanalysis of "small noise in modules of rank
3 over rings of only 1/3 dimension, as in Kyber-768". You're switching
to a broader topic, and not making clear for readers that the papers
you've cited don't answer the original question.

> https://www.ntru.org/f/hps98.pdf

Why is pointing to cryptanalysis of NTRU as supporting the security of
Kyber-768 supposed to be more defensible than pointing to cryptanalysis
of NTRU as supporting the security of rounding?

---Dan

Christopher J Peikert

Jan 13, 2022, 10:47:33 AM1/13/22
to pqc-forum
On Thu, Jan 13, 2022 at 12:44 AM D. J. Bernstein <d...@cr.yp.to> wrote:
Christopher J Peikert writes:
> Here's a sampling of papers that cryptanalyze modules over rings, like
> rank-2 NTRU lattices.

The question was about cryptanalysis of "small noise in modules of rank
3 over rings of only 1/3 dimension, as in Kyber-768". You're switching
to a broader topic, and not making clear for readers that the papers
you've cited don't answer the original question.

Nope. As I said (yet you omit from your quotes), the cited analyses adapt straightforwardly from rank-2 modules to rank 3+. This is exactly the kind of cryptanalysis you asked about. The results? Fixing the total lattice dimension, increasing the rank never makes the attacks perform better, and often makes them worse.

This point has been clear since my first reply to your faulty analogy. Please stop wasting everyone's time by repeatedly missing it.
 
> https://www.ntru.org/f/hps98.pdf

Why is pointing to cryptanalysis of NTRU as supporting the security of
Kyber-768 supposed to be more defensible than pointing to cryptanalysis
of NTRU as supporting the security of rounding?

For random errors on mid-rank modules, we have substantial cryptanalysis and theory alike. For the distinguishing feature of small rounding, we apparently don't have either. In particular, the existing analyses of noisy systems (NTRU or others) aren't enough.

Why not? Because rounding yields a very narrow, structured subset of the instances produced by random errors (of the same size). The vast majority of theory and cryptanalysis has targeted the latter, broader class of instances; we have not seen anything focused on the former.

In some cases, restricting to a structured subset does not seem (after careful study) to significantly reduce security. An example is mid-rank modules versus unstructured lattices.

However, there are countless cases where such restrictions introduce serious weaknesses, sometimes catastrophic ones.

Gaining confidence about a particular restriction -- especially a newer kind like small rounding -- requires serious effort focused on its actual instances and what makes them special. By the accounts so far, this hasn't been done at all for small rounding.

D. J. Bernstein

Jan 13, 2022, 7:30:48 PM1/13/22
to pqc-...@list.nist.gov
Let's try an example: the experiments reported in Section 4.2 of
https://www.ntru.org/f/hps98.pdf (the first paper Dr. Peikert cited).
These are attacking scaled-down versions of the original NTRU system.

Why exactly is it supposed to be okay to count these experiments towards
"enough cryptanalysis" of systems that move to rank-3 modules over a
ring of only 1/3 dimension (e.g., kyber768), when it's not okay to count
these towards "enough cryptanalysis" of systems that move from randomly
generated noise to rounding (e.g., sntrup761 or ntrulpr761)?

We've heard endlessly that the move from the original NTRU system to
rounding is heuristic---but the move to a system using rank-3 modules
over a ring of only 1/3 dimension is also heuristic! A careful look
shows that there are no theorems guaranteeing that moving from NTRU to
Kyber preserves security. Security could decrease, or could increase, or
could stay the same. If proofs are supposed to be important, why are
proof limitations being selectively advertised in just one direction?
Shouldn't we be worrying about _all_ of the proof limitations?

---Dan

Christopher J Peikert

Jan 13, 2022, 9:53:42 PM1/13/22
to pqc-forum
On Thu, Jan 13, 2022 at 7:30 PM D. J. Bernstein <d...@cr.yp.to> wrote:
Why exactly is it supposed to be okay to count these experiments towards
"enough cryptanalysis" of systems that move to rank-3 modules over a
ring of only 1/3 dimension (e.g., kyber768), when it's not okay to count
these towards "enough cryptanalysis" of systems that move from randomly
generated noise to rounding (e.g., sntrup761 or ntrulpr761)?

I already addressed this in detail more than once, but your replies have ignored my answers. Did you receive them?
 
If proofs are supposed to be important, why are
proof limitations being selectively advertised in just one direction?

Perhaps you are replying to the wrong thread? Because this one is about the conspicuously limited cryptanalysis of small rounding, not proof limitations (which were acknowledged for both directions). It's right there in the subject line.
 
Shouldn't we be worrying about _all_ of the proof limitations?

Sure! To help measure our confidence in a system, we could consider the cryptanalytic effort that has been spent on its defining features.

Vadim Lyubashevsky

Jan 14, 2022, 3:58:30 AM1/14/22
to pqc-forum
Dear all,

> Why exactly is it supposed to be okay to count these experiments towards
> "enough cryptanalysis" of systems that move to rank-3 modules over a
> ring of only 1/3 dimension (e.g., kyber768),

Because there are simple reductions that allow one to decrease the
degree of the ring, while increasing the module rank.

If R=Z[X]/(X^1024+1), and the problem is to find s,e satisfying
As+e=t, where A is a 1x1 matrix, then it can be converted to a problem
over the ring R'=Z[X]/(X^512+1) of finding s',e' satisfying
A's'+e'=t', where A' is a 2x2 matrix. Similarly, this can be
converted into a problem over the ring R''=Z[X]/(X^256+1) of finding
s'',e'' satisfying A''s''+e''=t'', where A'' is a 4x4 matrix. The
secret and the error coefficients do not increase, but the matrices A'
and A'' are structured, rather than random.

So if NTRU is over the ring R and Module-LWE (e.g. Kyber-1024) is over
R'', what is your "concern" with the above? Is it:

1. It's possible that Module-LWE over random A'' is easy (so
Kyber-1024 is easy), but Module-LWE over a structured A'' (i.e. the
one you get from transforming A-->A'') is hard?
2. That this reduction works only for dimensions that are a power of
2, so we can't say anything about Kyber-768 which has a 3x3 matrix A
(i.e. maybe it's as hard as Kyber with a 2x2 matrix over the same
ring).

Both of these "objections", however, are counter to a lot of the
theory that underlies lattice cryptography. (1) contradicts that
random matrices A are the hard instances and (2) contradicts that
increasing the dimension makes the lattice problem harder. So if
you're going to state hypotheses that go against established theory,
you need to have some evidence. Otherwise, one can state anything
that can't be proved or disproved -- (e.g. do you have concrete proof
contradicting the Qanon and the covid vaccine conspiracies? No? Then I
guess you should consider them to be as valid as their negations.)

> Shouldn't we be worrying about _all_ of the proof limitations?

Yes, but see above. Just because 2 things can't be proved, doesn't
mean that they are equally likely to be valid.

The difference between saying (1) LWE is hard, but LWR with small
rounding could be easy and (2) NTRU is hard, but Module-LWE could be
easy is that (1) does not contradict any of the underlying theory or
understanding upon which lattice crypto is built, whereas (2) would
mean that the core of the established theory is wrong. And so (2) is
*significantly* less likely to occur because there has been a lot more
work building up lattice theory than doing NTRU cryptanalysis. In
fact, I would say that (2) is significantly less likely than even the
event that NTRU and Module-LWE are both easy.

Best,
Vadim






D. J. Bernstein

Jan 14, 2022, 9:08:48 AM1/14/22
to pqc-forum
If it's reasonable to ask about the "cryptanalytic effort" devoted
"specifically to small rounding" (see first message in this thread) then
surely it's also reasonable to ask about the "cryptanalytic effort"
devoted "specifically" to rank-3 noise, as in kyber768.

I asked "Is there enough cryptanalysis of rank-3 noise?" and was pointed
to the original NTRU paper. But that paper didn't state, analyze, or
experiment with attacks for rank-3 noise. One can trivially point to
_portions_ of the analysis that apply to rank-3 noise, but the same is
true for rounding---and, structurally, this is the opposite of asking
for cryptanalysis "specifically" of rank-3 noise or rounding.

[ re experiments in Section 4.2 of the original NTRU paper: ]
> > Why exactly is it supposed to be okay to count these experiments towards
> > "enough cryptanalysis" of systems that move to rank-3 modules over a
> > ring of only 1/3 dimension (e.g., kyber768), when it's not okay to count
> > these towards "enough cryptanalysis" of systems that move from randomly
> > generated noise to rounding (e.g., sntrup761 or ntrulpr761)?
> I already addressed this in detail more than once

Sorry, I don't see it. What's the criterion that accepts the first move
and not the second? Obviously any valid criterion has to be stated in
terms that make sense for both moves.

---Dan

Christopher J Peikert

Jan 14, 2022, 1:53:40 PM1/14/22
to pqc-forum
To (hopefully) wrap up my participation in this thread... I want to highlight a central point about small rounding, which may not have been totally obvious before. Then I'll summarize why its status is so different from that of module lattices.

Rounding generates "noise" in a narrow, structured way. While almost all lattice schemes since the 1990s choose their noise at random from an enormous set, rounding does something substantially different: it uses unique, deterministic noise (by rounding off a random starting lattice point).

This means that an attacker doesn't necessarily need to solve a typical kind of "noisy decoding" problem. It only needs to break the special keys/ciphertexts that have such structured noise, which are extremely unlikely to appear in a random-noise system.

This idea is only about 10 years old [link], and the focus then was on "huge rounding." For the "small rounding" as used in NISTPQC proposals:
  • We still have no theory at all.
  • We have not seen any serious cryptanalytic effort on the key question of whether structured rounding noise introduces any significant weaknesses.
  • Apparently there aren't even systematic experiments that sanity-check small rounding against standard lattice algorithms.
It's entirely plausible that this rounding heuristic doesn't harm security. But any new kind of structure needs serious analysis before we can have much confidence in it. And in the near-absence of such study, confidence should start from a low baseline.

Module lattices are also highly structured, but their status is very different:
  • Their first use in cryptography goes back to the mid-1990s with NTRU, which has attracted a great deal of cryptanalytic attention.
  • By now there is a long line of theoretical and cryptanalytic work focusing on module structure beyond NTRU.
  • For example, work dealing with module rank above 2 goes back at least to 2002 [link] ("quasi-cyclic lattices" are module lattices).
All of this work is consistent with the following broad summary, in which we can have fairly good confidence:
  • Careful use of module structure (say, in line with what theory suggests) does not appear to introduce any significant weaknesses, as compared with unstructured lattices.
  • Increasing the module rank (while holding total dimension fixed) also does not seem to introduce any weaknesses, and in some cases actually strengthens security.
If, after reviewing this and previous messages from this thread, you still don't see why small rounding and mid-rank modules are not at all in analogous positions: then I wish you good luck!

Sincerely yours in cryptography,
Chris

D. J. Bernstein

Jan 15, 2022, 9:22:35 AM1/15/22
to pqc-forum
The beginning of this thread clearly asked "what (quantum) cryptanalytic
effort has been devoted specifically to small rounding?". There was lots
of text on how proofs don't rule out attacks specific to the rounding
submissions. (This particular observation regarding proofs is not new.)

Followups from the same source portrayed the use of rounding as "pretty
concerning" for "long-term post-quantum security", given a presumed lack
of "cryptanalysis specially devoted to rounding".

Proofs _also_ don't rule out attacks specific to the submissions using
rank-3 modules over rings of 1/3 dimension. Are we going to see a
pointer to the cryptanalytic effort devoted specifically to that case?
(Pointers to the original NTRU cryptanalysis etc. are obviously not
"devoted specifically" to this.)

Or are we going to see the use of rank-3 modules portrayed as "pretty
concerning" for "long-term post-quantum security", given a presumed lack
of cryptanalysis "specially devoted" to this case?

Obviously we've also seen various new arguments, but surely the status
of the original argument should be made clear. If the argument isn't
going to be applied consistently, it should be withdrawn for the record,
with a clear admission that it was asking the wrong question.

---Dan

P.S. There have been _dozens_ of papers during NISTPQC introducing
better lattice attacks, sometimes breaking claimed "barriers", sometimes
breaking systems outright. I am, of course, deeply concerned about the
security of the remaining lattice systems. I'm also concerned about the
way that pervasive lattice advertising selectively highlights some proof
gaps but ignores many other proof gaps; overall this exaggerates what's
covered by proofs, actively damaging awareness of lattice risks.

Christopher J Peikert

Jan 15, 2022, 6:29:14 PM1/15/22
to pqc-forum
Postscript:

At this point, what should we believe about the security of small rounding, and how much confidence should we have in that belief? Here is what I think.

Before this thread, I started with a mid-confidence but high-uncertainty baseline: my internal "distribution of possible outcomes" was centered at "not a lot less secure than random error," but was widely and thickly spread between "slightly more secure than random error" and "very little security at all." (In my opinion, the risk is almost all on the downside.)

Due to this thread, my level of confidence actually dropped somewhat, and my uncertainty has only increased.

My baseline was based on the assumption that some lattice/quantum/cryptanalysis experts had already made some serious-but-failed (and hence unpublished) attempts to attack small rounding. The lack of any such reports lowers my confidence from the baseline.

I expected that almost certainly someone (the relevant submitters, say) would have at least sanity-checked small rounding against standard lattice algorithms! The lack of any such reports suggests an almost total lack of real scrutiny; it both lowers my confidence and raises my uncertainty.

Overall, my best guess is that the NISTPQC KEMs' use of small rounding doesn't introduce any major weaknesses. But I have almost no confidence that this is true. If someone found a big speedup against small rounding (say, using quantum), it would not be at all shocking to me.

Sincerely yours in cryptography,
Chris

Post-post-script: in case anyone cares...

On Sat, Jan 15, 2022 at 9:22 AM D. J. Bernstein <d...@cr.yp.to> wrote:
The beginning of this thread clearly asked "what (quantum) cryptanalytic
effort has been devoted specifically to small rounding?"...

Proofs _also_ don't rule out attacks specific to the submissions using
rank-3 modules over rings of 1/3 dimension.

This analogy was claimed to "illustrate a fatal flaw in the original posting in this thread" -- the alleged "flaw" only being selective attention to small rounding, and not any problem with the substance, facts, reasoning, etc. As far as I can tell, this has been the only objection leveled against my original post.

(I can't imagine why selective attention, even if true, would be "fatal" to my message, but let's put that aside.)

Let's consider the allegation itself. Did my message justify a specific focus on small rounding and its cryptanalysis, as opposed to modules of rank 3+?

(Note that the analogy replaces the entire class of small rounding -- whether by one, two, or a few bits -- with a specific rank parameter of 3. But let's put aside this "type mismatch" too.)

The answer is clearly yes, so the allegation is nonsense. As stated from the very start:
  • Small rounding "is relatively new -- especially the 'small' aspect -- but it has received much less public attention compared to other aspects of lattice KEM designs."
    • By contrast, modules in lattice cryptography go back to the mid-1990s, and rank-3+ modules to 2002, with a long line of study since then. Also, modules have received substantial attention during NISTPQC, whereas small rounding has received very little (until this thread).
    • This distinction alone -- completely irrespective of proofs -- is already plenty of reason why small rounding deserves specific attention.
  • "Known proofs don't come close to applying to the KEMs' small rounding." Specifically, the proofs apply only to "huge rounding," and often have other serious limitations too.
    • By contrast, proofs do apply to mid-rank modules, including rank 3+.
    • To be absolutely clear, the theorems don't say "Kyber-768 is secure," but they do give nontrivial guarantees about mid-rank modules, and Kyber's design is guided by these theorems.
    • In short, we have a solid theory for modules, but no theory at all for small rounding.
    • "That leaves cryptanalysis as our only real basis for confidence" in small rounding. Hence the question about it.

Fre

Jan 28, 2022, 7:28:12 PM1/28/22
to pqc-forum
Dear all

To address the explicit request of Chris for sanity checks on small rounding vs random error, we can report that we did run such experiments.  In particular, we tested the success probability of running BKZ with varying block sizes to recover the secret key of rank 2 module-LWE with random noise vs with rounded noise on small instances (which gives a high enough probability for BKZ to succeed).

The conclusion of our tests is rather unsurprising: we could not find any statistically significant difference between the success probabilities for the two instance types, across all block sizes and numbers of samples.

Of course we are very much interested to hear any possible attack avenues that would be able to exploit rounding, but so far, there has not even been any suggestion in this direction.

Chris also states that ''Rounding generates "noise" in a narrow, structured way'', but it is unclear what is meant by "narrow". Does "narrow" refer to the resulting noise?

It would be interesting to hear people's opinions as to what is perceived as 'the main weakness' of rounding (if there is such a thing): is it the fact that the noise is generated deterministically, or is it more specifically that it is obtained as a result of rounding?  As a thought experiment: do people think it would be more secure to deterministically generate binomial noise (with the same variance, of course) where the coefficients of the lattice point A^t*s are used as seed (instead of rounding)?

Best regards

The SABER team


D. J. Bernstein

Jan 29, 2022, 7:13:03 AM1/29/22
to pqc-...@list.nist.gov
Fre writes:
> The conclusion of our tests is rather unsurprising: we could not find any
> statistically relevant difference between the success probabilities for
> both instances for all block sizes and number of samples.

To add some further confirmation here: I ran various hybrid-attack
experiments as part of the work that led to my pqc-forum messages dated
30 May 2020 02:15:31 +0200 and 7 Jul 2020 19:06:42 +0200. The rounding
case behaved just like the random-noise case, as anyone familiar with
the existing attack analyses would expect.

Hybrid attacks definitely _don't_ behave the same way as non-hybrid
attacks, so it's troubling that the Kyber documentation doesn't analyze
hybrid attacks. The Kyber noise distribution is narrow, mostly ternary;
the idea that tweaking a ternary distribution rules out hybrid attacks
isn't correct. One can search or MITM almost as long a stretch of error
positions for Kyber as one can for traditional ternary errors.

The hybrid threat is quantitatively smaller for SABER, which has a
distribution that's noticeably wider. NTRU and NTRU Prime deal with the
threat in a different way, keeping low noise but (1) including hybrid
attacks in the analysis and (2) taking larger dimensions to compensate
for the low noise. Kyber is in the maximally uncomfortable position of
taking low noise _and_ low dimensions while not even acknowledging, let
alone analyzing, the applicability of hybrid attacks.

If we're supposed to worry about KEMs using rounding, a feature that has
never created security problems, then shouldn't we be much more worried
about Kyber using low noise without even analyzing _known_ attacks that
exploit low noise? Can we have enough confidence to rely on Kyber's new
error distribution for long-term post-quantum security?

To be clear, the bigger question here is regarding the standards being
applied in this thread regarding risk management and the allocation of
valuable cryptanalytic time.

> Of course we are very much interested to hear any possible attack avenues
> that would be able to exploit rounding, but so far, there has not even been
> any suggestion in this direction.

Here's something that could be misunderstood as being in this direction.

For the case of rounding to something that doesn't divide the original
modulus, the NTRU Prime documentation describes the ciphertext-dependent
amount of information provided regarding each vector position. In short,
there are usually 3 possibilities, but _occasionally_ there are 2. This
makes a tiny difference in the overall size of the error vector.

(For SABER, rounding is to a divisor of the original modulus, so the
analogous differences are zero.)

The reason this isn't suggesting a rounding break without a noise break
is that replacing deterministic rounding with random noise of the same
widths would have the same effect on existing attack analyses. What
matters for these analyses is how big the error vector is.

Readers bothered by the probabilistic variation should note that the
Kyber error vector also varies in size from one ciphertext to another,
so some ciphertexts are easier to find than others by these attacks.
There's inherently a probabilistic question of what the success
probability is if the attacker can afford some fraction of the
"expected" attack time.

Kyber's attack analysis emphasizes the _typical_ size of the error
vector; as the NTRU Prime submission notes, this could overestimate or
underestimate security. The size variation is quantitatively less of an
issue for NTRU Prime, which uses fixed weight for the non-rounding part
of the error vector. (SABER error vectors also vary in size, obviously.)

Anyone looking at the attack analyses sees more and more questions to
investigate---none of which point to any way to break rounding without
breaking randomly generated noise, or vice versa. Proofs aren't helpful
here: they don't rule out the possibility of rounding being weaker, and
they also don't rule out the possibility of random noise being weaker.

Note that I'm focusing here on mathematical attacks. From a broader
security perspective, the available evidence favors rounding. There's a
documented pattern of implementors destroying security by botching the
generation of random noise, and rounding eliminates some of the known
failure modes while not adding any. See generally the "Footguns" thread
on pqc-forum starting 28 Aug 2019 13:54:59 +0000.

---Dan

Christopher J Peikert

Jan 31, 2022, 5:04:17 PM1/31/22
to Fre, pqc-forum
On Fri, Jan 28, 2022 at 7:28 PM Fre <frederik.v...@gmail.com> wrote:
To address the explicit request of Chris for sanity checks on small rounding vs random error, we can report that we did run such experiments.  In particular, we tested the success probability of running BKZ with varying block sizes to recover the secret key of rank 2 module-LWE with random noise vs with rounded noise on small instances (which gives a high enough probability for BKZ to succeed).

The conclusion of our tests is rather unsurprising: we could not find any statistically relevant difference between the success probabilities for both instances for all block sizes and number of samples.

Thanks for running these sanity checks (is the code publicly available anywhere?). Indeed, this is the expected outcome -- BKZ wasn't designed to exploit rounding -- but it's good to have confirmation. I think this should be seen as a first step in the serious cryptanalysis of small rounding.
 
Chris also states that ''Rounding generates "noise" in a narrow, structured way.'', but it is unclear what is meant with narrow?  Does narrow refer to the resulting noise?

While small rounding does produce "narrow" noise (by definition), here I intended a different meaning, as elaborated upon in the next sentence: "While almost all lattice schemes ... choose their noise at random from an enormous set, rounding ... uses unique, deterministic noise".

To be more explicit, "narrow" refers to the use of unique deterministic noise, instead of random from a huge space. And "structured" reflects that the noise is produced simply by rounding off a lattice point, as opposed to some more complicated (e.g., pseudorandom) procedure.

These two properties are basically orthogonal -- we can imagine methods having either one and not the other -- but small rounding has both.
 
It would be interesting to hear people's opinions as to what is perceived as 'the main weakness' of rounding (if there is such a thing)

It's probably clear from my previous messages, but in my view, the main issue is that small rounding has apparently not been studied much at all in its short lifetime, so it is not yet deserving of much confidence.

In summary: we have no theory; we have only some sanity-check experiments (using algorithms that weren't designed with rounding in mind); and we apparently have no serious attempts to attack rounding specifically. But it does have a good pedigree as something "similar to" random noise, so partial credit there at least.
 
> Is it the fact that the noise is generated deterministically, or is it more specifically that it is obtained as a result of rounding? As a thought experiment: do people think it would be more secure to deterministically generate binomial noise (with the same variance, of course) where the coefficients of the lattice point A^t*s are used as the seed (instead of rounding)?

If small rounding has some particular vulnerability, my best guess is that it would be most severe with the combination of determinism and simple noise derivation.

Using the lattice point as the seed to a cryptographic PRG for generating the noise is an interesting idea; this method would be "narrow but unstructured," in the above terminology. I can't venture a guess as to whether that would "rescue" rounding in this hypothetical world.
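A toy sketch of that thought experiment (purely illustrative: SHAKE-128 as the PRG, eta = 2, and the serialization are my arbitrary choices here, not anyone's proposal):

    import hashlib

    def seeded_binomial_noise(lattice_point, q=3329):
        # Use the (reduced) coefficients of the lattice point as the PRG seed.
        seed = b"".join((c % q).to_bytes(2, "little") for c in lattice_point)
        # One byte per coefficient suffices for eta = 2 (only 4 bits are used).
        stream = hashlib.shake_128(seed).digest(len(lattice_point))
        noise = []
        for byte in stream:
            a = (byte & 1) + ((byte >> 1) & 1)
            b = ((byte >> 2) & 1) + ((byte >> 3) & 1)
            noise.append(a - b)  # centered binomial, eta = 2, values in [-2, 2]
        return noise

    point = [17, 2500, 903, 3001]
    print(seeded_binomial_noise(point))
    print(seeded_binomial_noise(point))  # identical: the noise is deterministic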
 
> Of course we are very much interested in hearing about any possible attack avenues that would be able to exploit rounding, but so far, there has not even been any suggestion in this direction.

Here is a potential direction, which probably won't pan out (and definitely doesn't work with just the pieces I'll describe), but things of this nature should be carefully considered.

In 2003, Regev [link] gave a quantum reduction from lattice problems to the dihedral coset (a.k.a. hidden-shift) problem with a "failure parameter," which specifies the probability that a given sample is arbitrary "junk" instead of a valid coset state.

For certain parameters, Kuperberg's algorithm can solve this DCP-with-failure problem in subexponential exp(sqrt(log N)) time, where 2N is the order of the dihedral group. Note that to make the junk probability small enough when attacking n-dimensional lattice problems using this approach, one seemingly needs to take log N = Omega(n^2), so the ultimate result is just an exponential-time quantum algorithm.

For BDD-type problems (like rounding), Regev's reduction works essentially by preparing a certain quantum superposition over lattice points and a given shift, then measuring it. This yields either a valid coset state, or "junk." Importantly, the state is valid if the measured lattice point and its corresponding shift are both in the same "rounding cube"; otherwise, it is junk.

In the setting of rounding, we are guaranteed by construction that the rounded target point and its nearest lattice vector are in the same rounding box. By contrast, this will frequently not be the case when using random error.

Can we somehow use this distinction to get a lower probability of junk, and thereby a concretely or asymptotically better attack? I don't immediately see how, because the entire superposition matters, not just the given target and its nearest lattice vector. But I also haven't tried many ideas here. Can we apply similar techniques directly to small rounding itself, rather than going through reductions? I haven't pursued this at all, but it seems worthy of some effort.
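To make the contrast in the previous two paragraphs concrete, here is a toy numerical check (this is not Regev's reduction; the "lattice" is just integer points and all parameters are illustrative):

    import random

    p, n, trials = 8, 16, 10000

    def cell(x):
        # Index of the rounding cell: which multiple of p the value rounds to.
        return round(x / p)

    crossings = 0
    for _ in range(trials):
        v = [random.randint(-1000, 1000) for _ in range(n)]        # "lattice point"
        e = [random.randint(-p // 2, p // 2) for _ in range(n)]    # random error of rounding size
        t = [vi + ei for vi, ei in zip(v, e)]
        if any(cell(ti) != cell(vi) for ti, vi in zip(t, v)):
            crossings += 1

    print("random error: fraction landing outside the lattice point's cell:",
          crossings / trials)
    # With rounding, the target p*cell(v_i) is by construction the center of
    # v's own cell in every coordinate, so this fraction would be exactly 0.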

Christopher J Peikert

unread,
Jan 31, 2022, 5:47:11 PM1/31/22
to pqc-forum
On Sat, Jan 29, 2022 at 7:13 AM D. J. Bernstein <d...@cr.yp.to> wrote:
> If we're supposed to worry about KEMs using rounding, a feature that has
> never created security problems

"Never created security problems" doesn't mean much in this case, since small rounding is rather new (appearing for the first time in NISTPQC submissions!), and we have seen almost no serious cryptanalytic effort to take advantage of it.

No new, special structure in crypto has ever created any security problem -- until someone actually tried to exploit it, and succeeded.

> Anyone looking at the attack analyses sees more and more questions to
> investigate---none of which point to any way to break rounding without
> breaking randomly generated noise, or vice versa.

This also means little, since the known attacks were designed with random noise in mind, not rounding.

> To be clear, the bigger question here is regarding the standards being
> applied in this thread regarding risk management and the allocation of
> valuable cryptanalytic time.

The allegation that this thread represents some kind of selective scrutiny or double standard has been conclusively debunked; see the second half of [link].

As for "allocation of valuable cryptanalytic time": are you saying that it would be a misallocation for people to put serious effort into cryptanalyzing small rounding? That would be a baffling position.

Two of the seven finalist/alternate KEMs are no more secure than their underlying small-rounding problems: breaking the latter directly breaks the schemes. How could it be a misallocation to devote cryptanalytic effort to these problems, especially when the current amount appears to be near zero?
 
> Proofs aren't helpful
> here: they don't rule out the possibility of rounding being weaker, and
> they also don't rule out the possibility of random noise being weaker.

As Vadim correctly noted, "proofs don't rule out X nor Y" does not mean that X and Y are equally (im)plausible. The overall amount of scrutiny, understanding, theory about the relevant structures, etc., should all factor into the evaluation.

Jacob Alperin-Sheriff

unread,
Jan 31, 2022, 6:26:44 PM1/31/22
to Christopher J Peikert, pqc-forum
I would add that, unless I misremember Kyber (or it got tweaked a bunch), this part makes no sense for a reason Chris did not mention:
> If we're supposed to worry about KEMs using rounding, a feature that has
> never created security problems, then shouldn't we be much more worried
> about Kyber using low noise without even analyzing _known_ attacks that
> exploit low noise? Can we have enough confidence to rely on Kyber's new
> error distribution for long-term post-quantum security?


Kyber rounds first, then adds low random noise. Again I haven't looked at the scheme in over 2 years, but if we can have confidence in rounding, then Kyber's error distribution doesn't really matter ... (or is this in the 'public key' part of the KEM)
 


--
-Jacob Alperin-Sheriff

Christopher J Peikert

unread,
Jan 31, 2022, 8:29:24 PM1/31/22
to Jacob Alperin-Sheriff, pqc-forum
On Mon, Jan 31, 2022 at 6:26 PM Jacob Alperin-Sheriff <jaco...@gmail.com> wrote:
> I would add that, unless I misremember Kyber (or it got tweaked a bunch), this part makes no sense for a reason Chris did not mention:
>> If we're supposed to worry about KEMs using rounding, a feature that has
>> never created security problems, then shouldn't we be much more worried
>> about Kyber using low noise without even analyzing _known_ attacks that
>> exploit low noise? Can we have enough confidence to rely on Kyber's new
>> error distribution for long-term post-quantum security?


> Kyber rounds first, then adds low random noise.

It’s swapped: Kyber first adds random error, then rounds down somewhat. But the point basically remains the same: both kinds of error seem to contribute to hardness. For attacks, it’s not enough to consider only the random noise and not the additional rounding noise.
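For concreteness, here is a toy single-coefficient sketch of that order of operations, using Kyber-style Compress/Decompress maps (please consult the specification for the authoritative definitions; d = 10 and the other values below are only illustrative):

    import random

    q = 3329

    def compress(x, d):
        # Map Z_q -> Z_{2^d}, keeping roughly the top d bits of information.
        return round((2 ** d / q) * x) % (2 ** d)

    def decompress(y, d):
        # Approximate inverse of compress.
        return round((q / 2 ** d) * y) % q

    def centered_binomial(eta):
        return sum(random.randint(0, 1) - random.randint(0, 1) for _ in range(eta))

    x = random.randrange(q)                   # stand-in for a pre-noise coefficient
    noisy = (x + centered_binomial(2)) % q    # step 1: add random (LWE-style) error
    transmitted = compress(noisy, 10)         # step 2: compress, i.e., round

    recovered = decompress(transmitted, 10)
    diff = (recovered - noisy) % q
    if diff > q // 2:
        diff -= q
    print("additional rounding error from compression:", diff)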

Peter Schwabe

unread,
Feb 1, 2022, 2:06:04 AM2/1/22
to Christopher J Peikert, Jacob Alperin-Sheriff, pqc-forum
Christopher J Peikert <cpei...@alum.mit.edu> wrote:
> It’s swapped: Kyber first adds random error, then rounds down somewhat. But
> the point basically remains the same: both kinds of error seem to
> contribute to hardness. For attacks, it’s not enough to consider only the
> random noise and not the additional rounding noise.

Dear Chris, dear all,

This is strictly speaking correct, but might still be misunderstood. Of
course for *attacks* both kinds of noise contribute to the hardness.

However, the security analysis of Kyber-768 (NIST level 3) and
Kyber-1024 (NIST level 5) is more conservative and relies entirely on
the random (LWE) noise. Only in the analysis of Kyber-512 do we also
present numbers that additionally take into account the noise introduced
by rounding. However, we *also* present the number that is based solely
on random noise. The difference between these two is 6 bits of classical
security.

Cheers,

Peter
signature.asc

D. J. Bernstein

unread,
Feb 1, 2022, 7:57:53 AM2/1/22
to pqc-...@list.nist.gov
Christopher J Peikert writes:
> It’s swapped: Kyber first adds random error, then rounds down somewhat. But
> the point basically remains the same: both kinds of error seem to
> contribute to hardness. For attacks, it’s not enough to consider only the
> random noise and not the additional rounding noise.

Indeed, the round-2 Kyber submission claimed that the Kyber security
estimate was "a very conservative lower bound on the cost of an actual
attack" (which in turn played a critical role in the claim that round-2
Kyber-512 was as hard to break as AES-128), and, as a specific reason
for this, claimed that the costs of an actual attack included dealing
with "the additional rounding noise" in ciphertexts.

_However_, the claim that attacks have to contend with rounding noise is
an outright error in the round-2 Kyber security analysis. The standard
_key-recovery_ attacks are _not_ faced with rounding noise.

The actual situation is that the attacker has multiple attack targets.
For attacks against LPR, the analyses of the standard key-recovery
attacks targeting A = aG+e have important differences from the analyses
of the standard ciphertext attacks targeting B = Gb+d, C = M+Ab+c:

* The standard key-recovery attacks are limited to n samples from A.
These attacks are also applicable to original Quotient NTRU. These
attacks are _not_ affected by Kyber's ciphertext rounding.

* The standard ciphertext attacks can use as many as 2n samples,
namely n from B and n from C. Attacks using more than n samples are
_not_ applicable to Quotient NTRU (contrary to the myth that LPR
has been proven to be as secure as Quotient NTRU). These attacks
_are_ affected by Kyber's ciphertext rounding.

Table 4 of the round-2 Kyber submission lists a Kyber-512 attack using
only 410 samples. That's below 512 samples. Such an attack can and
should focus on the key. For such a key-recovery attack, it is simply
_not true_ that the attack has to contend with Kyber's "additional
rounding noise".

The "additional rounding noise" statement was also in the round-1 Kyber
submission document. Round-1 Kyber was different in that it _did_ have
some rounding in the keys (although the statement refers only to "noise
introduced in ciphertexts"). Apparently an important step in the
security analysis for a bleeding-edge cryptosystem wasn't re-checked
when the cryptosystem was modified in the middle of the NISTPQC process.

Round-3 Kyber-512 is yet another new cryptosystem, with ciphertext noise
narrower than the key noise, so there are really three separate cases in
the analysis:

* There are the standard key-recovery attacks. These are limited to
<=n samples. As in the case of round-2 Kyber, these attacks against
round-3 Kyber are _not_ faced with any rounding noise.

* There are the standard ciphertext attacks using <=n samples.
Compared to the key-recovery attacks, these are faced with rounding
but narrower errors.

* There are the standard ciphertext attacks using >n samples---the
types of standard attacks that don't apply to Quotient NTRU. These
are again faced with rounding but narrower errors.

In short, round-3 Kyber, like round-2 Kyber, does _not_ put any rounding
noise in the way of the standard key-recovery attacks.

We hear endlessly about how "conservative" it is to use Kyber security
estimates that ignore the rounding in ciphertexts, but where are the
security estimates for _key-recovery_ attacks that aren't faced with
rounding to begin with?

Perhaps the key-recovery estimates are noticeably higher than the
ciphertext attack estimates, or perhaps not. Either way, isn't it
important for the submission to present the numbers, to explicitly alert
people that the "conservative" aspect of ignoring rounding _does not
apply_ to these key-recovery attacks, and to issue clear errata
regarding previous claims to the contrary?

It's not that this is a newly identified mistake. My pqc-forum email
dated 30 May 2020 02:15:31 +0200 pointed out a number of problems with
the round-2 Kyber security analysis, including exactly this mistake.
(Look for "ciphertext rounding" in that message.)

The round-3 Kyber submission didn't include proper errata for those
problems. It introduced a new patched cryptosystem and a new security
analysis; that's not the same as putting up warning signs admitting
exactly what was wrong with the old analysis.

So it's easy for people evaluating round-2 or round-3 Kyber to fall into
the trap of still believing the old claims---thinking that any attack
analysis looking just at the random noise is an underestimate, since
there's also "additional rounding noise".

---D. J. Bernstein
signature.asc

Christopher J Peikert

unread,
Feb 1, 2022, 12:19:33 PM2/1/22
to Peter Schwabe, Jacob Alperin-Sheriff, pqc-forum
On Tue, Feb 1, 2022 at 2:05 AM Peter Schwabe <pe...@cryptojedi.org> wrote:

> It’s swapped: Kyber first adds random error, then rounds down somewhat. But
> the point basically remains the same: both kinds of error seem to
> contribute to hardness. For attacks, it’s not enough to consider only the
> random noise and not the additional rounding noise.

> This is strictly speaking correct, but might still be misunderstood. Of
> course for *attacks* both kinds of noise contribute to the hardness.
>
> However, the security analysis of Kyber-768 (NIST level 3) and
> Kyber-1024 (NIST level 5) is more conservative and relies entirely on
> the random (LWE) noise. Only in the analysis of Kyber-512 do we also
> present numbers that additionally take into account the noise introduced
> by rounding. However, we *also* present the number that is based solely
> on random noise. The difference between these two is 6 bits of classical
> security.

Indeed. This is why, from the start of this thread, I did not include Kyber among the two (out of nine, not seven as I previously said) finalists/alternates whose security relies entirely upon small rounding.

This thread is about small rounding and its (lack of) cryptanalysis; criticism of Kyber's (or any other scheme's) random noise is off-topic and should go elsewhere.

D. J. Bernstein

unread,
Feb 1, 2022, 12:55:35 PM2/1/22
to pqc-...@list.nist.gov
Christopher J Peikert writes:
> criticism of Kyber's (or any other scheme's) random noise is off-topic

Applying procedures to examples beyond the example at hand is a standard
way, often the easiest way, to see and report flaws in the procedures.
Declaring the other examples off-topic doesn't make them so.

---Dan
signature.asc

Fre

unread,
Feb 4, 2022, 6:57:10 AM2/4/22
to Christopher J Peikert, pqc-forum

> Thanks for running these sanity checks (is the code publicly available anywhere?). Indeed, this is the expected outcome -- BKZ wasn't designed to exploit rounding -- but it's good to have confirmation. I think this should be seen as a first step in the serious cryptanalysis of small rounding.


The script sanity_check.sage takes 3 inputs: 
- n: the dimension of the base ring Z_q[x]/(x^n+1) 
- q: the modulus  
- numtests: the number of tests to be performed.

It then runs BKZ for different block sizes and numbers of samples for a rank-2 module problem over Z_q[x]/(x^n+1). It rounds to multiples of 8. At the end it produces an overview of the success probabilities, e.g. output of the form:

n 16 q 128 rounding False m 32 d 65 beta 20 percentage 0.796700000 numinputs 10000
n 16 q 128 rounding True m 32 d 65 beta 20 percentage 0.797400000 numinputs 10000

which means that 79.67% of instances were broken with random noise vs. 79.74% with rounding, over 10k tests (and this is repeated for other block sizes / numbers of samples as well).
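For readers who want to reproduce something in this spirit without Sage, here is a rough plain-LWE (not module) analogue of the comparison, using fpylll's BKZ on the usual primal (Kannan) embedding. To be clear, this is not sanity_check.sage; the parameters are toy values and should be tuned so that the success probability sits away from 0 and 1:

    import random
    from fpylll import IntegerMatrix, BKZ

    n, m, q, p, beta, trials = 8, 16, 128, 8, 20, 50   # toy parameters

    def centered(x):
        # Lift x mod q to the representative in (-q/2, q/2].
        x %= q
        return x - q if x > q // 2 else x

    def make_instance(rounded):
        A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
        s = [random.choice([-1, 0, 1]) for _ in range(n)]
        As = [sum(A[i][j] * s[j] for j in range(n)) % q for i in range(m)]
        if rounded:
            b = [(p * round(x / p)) % q for x in As]                     # deterministic rounding
        else:
            b = [(x + random.randint(-p // 2, p // 2)) % q for x in As]  # random noise of similar size
        return A, s, b

    def attack(A, s, b):
        # Primal embedding: the short vector (e, -s, 1) lies in this lattice.
        d = m + n + 1
        B = [[0] * d for _ in range(d)]
        for i in range(m):
            B[i][i] = q                      # q-vectors
        for j in range(n):
            for i in range(m):
                B[m + j][i] = A[i][j]        # columns of A
            B[m + j][m + j] = 1
        for i in range(m):
            B[m + n][i] = b[i]               # target row
        B[m + n][m + n] = 1
        M = IntegerMatrix.from_matrix(B)
        BKZ.reduction(M, BKZ.Param(block_size=beta))
        e = [centered(b[i] - sum(A[i][j] * s[j] for j in range(n))) for i in range(m)]
        target = e + [-x for x in s] + [1]
        for i in range(M.nrows):
            v = list(M[i])
            if v == target or v == [-x for x in target]:
                return True
        return False

    for rounded in (False, True):
        wins = sum(attack(*make_instance(rounded)) for _ in range(trials))
        print("rounded =", rounded, "success rate:", wins / trials)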

Best

Fre
 

D. J. Bernstein

unread,
Feb 5, 2022, 7:29:00 AM2/5/22
to pqc-...@list.nist.gov
I wrote:
> _However_, the claim that attacks have to contend with rounding noise is
> an outright error in the round-2 Kyber security analysis. The standard
> _key-recovery_ attacks are _not_ faced with rounding noise.

To assist readers in tracking what's known, are we going to see errata
issued for the false claims to the contrary earlier in this thread?

Also, are we going to see a statement from the Kyber team regarding what
security level round-3 Kyber claims against key-recovery attacks? It
seems particularly important to quantify this, given (1) the widespread
perception that Kyber's rounding is an extra defense and (2) the fact
that this defense _does not_ apply to the standard key-recovery attacks
against Kyber.

---D. J. Bernstein

P.S. The Kyber team email dated 4 Jun 2020 17:21:31 +0200 did properly
acknowledge the rounding mistake in the round-2 Kyber security analysis
in response to my pointing it out---but clearly that wasn't sufficient;
we see the same mistake showing up again in 2022. The Kyber round-3
submission should have included an erratum and appropriate credit.
signature.asc
Message has been deleted

Moody, Dustin (Fed)

unread,
Oct 25, 2022, 9:54:01 PM10/25/22
to Fatima ASEBRIY, pqc-forum, cpei...@alum.mit.edu
Please see NISTIR 8413: Status Report on the 3rd Round of the NIST PQC Standardization Process


In particular see Section 4.3.4, which explains our decision regarding Saber.


Dustin Moody
NIST


From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> on behalf of Fatima ASEBRIY <fatima....@gmail.com>
Sent: Tuesday, October 25, 2022 6:25 PM
To: pqc-forum <pqc-...@list.nist.gov>
Cc: cpei...@alum.mit.edu <cpei...@alum.mit.edu>
Subject: [pqc-forum] Re: Is there enough cryptanalysis of "small rounding"?
 
Hello,
why was SABER not selected among the candidates of round 4 (NIST)?

fei 123

unread,
Oct 26, 2022, 4:51:18 PM10/26/22
to Fatima ASEBRIY, pqc-forum, cpei...@alum.mit.edu

> Moreover, several other proof techniques that provide insights about random errors do not appear to translate to deterministic rounding, especially with rings/modules (as used by the KEMs). These include the safety of using a small "secret" drawn from the error distribution; sample-preserving search-to-decision reductions; and the quantum reduction that says "solving BDD with random errors implies finding short (dual) lattice vectors".

What exactly do you mean by "solving BDD with random errors implies finding short (dual) lattice vectors"?

Christopher J Peikert

unread,
Oct 26, 2022, 5:26:37 PM10/26/22
to fei 123, Fatima ASEBRIY, pqc-forum
On Wed, Oct 26, 2022 at 4:51 PM fei 123 <feip...@gmail.com> wrote:
> Hi Chris,

>> Moreover, several other proof techniques that provide insights about random errors do not appear to translate to deterministic rounding, especially with rings/modules (as used by the KEMs). These include the safety of using a small "secret" drawn from the error distribution; sample-preserving search-to-decision reductions; and the quantum reduction that says "solving BDD with random errors implies finding short (dual) lattice vectors".

> What exactly do you mean by "solving BDD with random errors implies finding short (dual) lattice vectors"?

Hi, I mean the "from [BDD] to samples" part of Regev's paper on LWE, Section 3.2.2. (He calls the problem CVP, but it has since come to be called BDD to emphasize the distance guarantee, which CVP does not have.) It can also be combined with Section 3 of this paper, when considering random BDD errors.

Sincerely yours in cryptography,
Chris

Daniel Apon

unread,
Oct 26, 2022, 5:29:07 PM10/26/22
to fei 123, Fatima ASEBRIY, pqc-forum, cpei...@alum.mit.edu
Hello fei 123,

You wrote: "For Saber, I am a bit confused with the statement "The disadvantage of power of 2 moduli is that they do not allow an NTT implementation of polynomial multiplication." ?"

The Number Theoretic Transform (NTT) is an algorithm. The NTT is an exact-arithmetic version of the (otherwise complex-valued) FFT, used in cryptographic applications. The Fast Fourier Transform (FFT) is an algorithm for evaluating the traditional Discrete Fourier Transform (DFT). The development of fast algorithms for the DFT can be traced to Gauss's work in 1805, but the first widely known published version was in 1965 by James Cooley (an IBM researcher) and John Tukey (who coined the term "bit" in 1947 at Bell Labs).

The NTT is sometimes used to compute polynomial multiplications much more efficiently than otherwise. For example, the NTT is used in Kyber. The algorithm requires certain properties on the modulus -- in particular, the choice of a specific prime integer-modulus (not a power -f-2 integer-modulus).
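For concreteness, here is a small self-contained sketch of NTT-based multiplication in Z_q[x]/(x^n+1), checked against schoolbook multiplication (n = 256 and q = 7681 are just a toy NTT-friendly choice -- 2n divides q-1 -- not the parameters of any particular submission):

    import random

    n, q = 256, 7681   # toy NTT-friendly parameters: 2n divides q - 1

    # Find psi with psi^n = -1 mod q; for n a power of two this is a primitive
    # 2n-th root of unity, so omega = psi^2 is a primitive n-th root of unity.
    psi = next(x for x in range(2, q) if pow(x, n, q) == q - 1)
    omega, psi_inv, n_inv = pow(psi, 2, q), pow(psi, q - 2, q), pow(n, q - 2, q)

    def ntt(a, w):
        # Recursive radix-2 Cooley-Tukey transform: evaluates a(x) at powers of w.
        m = len(a)
        if m == 1:
            return a[:]
        even, odd = ntt(a[0::2], w * w % q), ntt(a[1::2], w * w % q)
        out, wk = [0] * m, 1
        for k in range(m // 2):
            t = wk * odd[k] % q
            out[k], out[k + m // 2] = (even[k] + t) % q, (even[k] - t) % q
            wk = wk * w % q
        return out

    def mul_ntt(a, b):
        # Twist by powers of psi to turn the cyclic convolution into a negacyclic one.
        at = [a[i] * pow(psi, i, q) % q for i in range(n)]
        bt = [b[i] * pow(psi, i, q) % q for i in range(n)]
        ct = [x * y % q for x, y in zip(ntt(at, omega), ntt(bt, omega))]
        c = ntt(ct, pow(omega, q - 2, q))          # inverse transform, unscaled
        return [c[i] * n_inv % q * pow(psi_inv, i, q) % q for i in range(n)]

    def mul_schoolbook(a, b):
        c = [0] * n
        for i in range(n):
            for j in range(n):
                k, sign = (i + j) % n, 1 if i + j < n else -1   # x^n = -1
                c[k] = (c[k] + sign * a[i] * b[j]) % q
        return c

    a = [random.randrange(q) for _ in range(n)]
    b = [random.randrange(q) for _ in range(n)]
    print(mul_ntt(a, b) == mul_schoolbook(a, b))   # expected: True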

Best regards,
--Daniel Apon

On Wed, Oct 26, 2022 at 4:51 PM fei 123 <feip...@gmail.com> wrote:

Daniel Apon

unread,
Oct 26, 2022, 5:30:30 PM10/26/22
to fei 123, Fatima ASEBRIY, pqc-forum, cpei...@alum.mit.edu
Typo:

" power -f-2 integer-modulus" => " power-of-2 integer-modulus"

Sujoy Sinha Roy

unread,
Oct 26, 2022, 6:14:19 PM10/26/22
to Daniel Apon, fei 123, Fatima ASEBRIY, pqc-forum, cpei...@alum.mit.edu
There are several NTT-based polynomial multiplication implementations of Saber on general-purpose CPUs and hardware platforms. Saber didn't tie its protocol specification to any special polynomial multiplication algorithm. An NTT for Saber requires working with a larger prime (e.g., 23 bits) than the 13-bit modulus, which is not as fast as working with a native 13-bit NTT. At the same time, leaving the polynomial multiplication generic might be a good decision if, in the future, there are easy-to-perform fault or side-channel attacks that work very effectively against a specific type of polynomial multiplication algorithm.

Here are two papers on using NTT for Saber's polynomial multiplication.

"NTT Multiplication for NTT-unfriendly Rings: New Speed Records for Saber and NTRU on Cortex-M4 and AVX2"  -- IACR TCHES 2021.

"A Unified Cryptoprocessor for Lattice-based Signature and Key-exchange" -- IEEE Transactions on Computers 2022.
https://eprint.iacr.org/2021/1461

The last work uses the polynomial multiplier circuit of Dilithium (without any modification) to compute polynomial multiplications of Saber's specification. 
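As a rough back-of-the-envelope sketch of the coefficient-size reasoning (my own toy numbers, assuming a Saber-like secret range of [-4, 4] and centered ring coefficients; not taken from either paper above):

    n, mu, logq_ring = 256, 8, 13                        # assumed, for illustration only
    coeff_bound = n * (mu // 2) * 2 ** (logq_ring - 1)   # worst case for the exact product over Z
    print("worst-case |coefficient| of the exact integer product:", coeff_bound)  # 4194304 = 2^22
    # An NTT over a prime somewhat above 2 * coeff_bound (in the ~23-bit range)
    # therefore recovers the exact product over Z, which can then be reduced
    # mod 2^13 (and mod x^n + 1) to obtain the result in Saber's ring.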

Regards
Sujoy
 





Peralta, Rene C. (Fed)

unread,
Oct 26, 2022, 7:34:38 PM10/26/22
to pqc-forum
Multiplicative group not cyclic?

René.





From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> on behalf of fei 123 <feip...@gmail.com>
Sent: Wednesday, October 26, 2022 1:50 PM
To: Fatima ASEBRIY <fatima....@gmail.com>
Cc: pqc-forum <pqc-...@list.nist.gov>; cpei...@alum.mit.edu <cpei...@alum.mit.edu>
Subject: Re: [pqc-forum] Re: Is there enough cryptanalysis of "small rounding"?
 

