
Aug 20, 2021, 4:23:25 PM8/20/21

to pqc-...@list.nist.gov

Just gave a talk on algorithms getting (heuristically+experimentally)

within ε of the shortest nonzero vector for cyclotomic Ideal-SVP,

breaking through every previously claimed lower bound. Joint work with

Kirsten Eisenträger, Tanja Lange, Karl Rubin, Alice Silverberg, and

Christine van Vredendaal. https://cr.yp.to/talks.html#2021.08.20

---Dan


Aug 24, 2021, 12:29:20 AM8/24/21

to pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov

Hi Dan (speaking for myself),

This seems like a very excellent talk, and very timely.

Apologies for the slow reply; I've been digesting the material over the weekend. Two things:

1) It's worth pointing out first that, assuming the claims are verified and accepted (more or less as-is) by the community, the impact is a very significant bit-security loss (or at least an extremely significant step toward a very significant bit-security loss..) for the following NIST PQC R3 Finalists:

- Kyber

- Saber

- NTRU

- Dilithium

- Falcon

As such, these results would potentially have a major effect on the final decision process for NIST PQC before standardization decisions are made.

To wit, schemes in prior rounds have been excluded from progressing to subsequent rounds due to weaker attacks than the claimed attack here.

2) While people are likely still in the process of analyzing the claimed outcomes, I'd like to ask a few clarifying questions to help everyone out (enumerated below with i), ii), etc.)..

The ultimate claim is: "eps-Approx-SVP in Power-of-2 Ideal Lattices can be solved in classical time ~exp(n^(1/2))" for (essentially) any very small eps.

__i) Does this main claim sound about right?__

If so, this seems not to 'invalidate' the reduction-based security proofs for Power-of-2 Cyclotomic style cryptosystems, but rather it makes the consequences of such reductions for real-world security.. well, mostly ineffectual (even worthless?).

E.g. "my cryptosystem is at least as hard to break as This Problem; however, This Problem turned out to be much easier than I expected"

__ii) Do you have in mind how to translate such an attack (if it's as effective as claimed) into a key recovery attack against the Finalists listed above?__

(It is worth noting that we don't necessarily seem to have a strong theory as to why this 'translation step' would be overly hard. In particular, even without a clear 'translation step' as such in hand, the schemes would be significantly more in danger (and significantly more risky to standardize) than last week.. But it's an interesting question that seems worth asking anyway.)

Now, turning more toward the core technical matter..

It seems there are two "phenomena" that justify the improved attack paradigm.

The first is more pedestrian, but potentially key: The claimed "- typo" in the DPW 19 graph presentation.

__iii) Assuming the 'typo claim' is correct, how much does this artifact contribute toward achieving a sub-exp-time attack that hits any very small eps approximation factor? (How much does this factor into the experimental evidence?)__

The second is the most interesting question (probably): The central/key part of the new technique seems to be

(a) the generalization of units to S-units, and

(b) the observation that (at least) subexponentially many more S-units can apparently be found.

By my understanding, the central/key intuition for the improved attack is that the presence of subexponentially more S-units leads to an improved CVP outcome in the log-S-unit lattice.

__iv) Does this intuition sound about right?__

__v) Say I am happy to assume that this /huge/ number of S-units (with nice properties) are present, is it clear that g/u (resp. gu/v) will necessarily be exceptionally short? (This is written as \eta \le n^((1/2)+o(1)) in the talk slides.)__

Regarding question (v), the point is that prior work generated vectors that were exp(n^(1/2))-approximately-short in exp(n^(1/2))-time. Now, the claim is ~n^(1/2)-short in exp(n^(1/2))-time.
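For scale, here is some back-of-the-envelope arithmetic (my own illustration; n = 512 is chosen purely as an example dimension, not taken from the talk):

```python
import math

n = 512  # illustrative ring dimension

# Prior work: exp(n^(1/2))-approximately-short vectors in exp(n^(1/2)) time.
old_approx = math.exp(math.sqrt(n))
# Claimed now: ~n^(1/2)-short vectors in the same exp(n^(1/2)) time.
new_approx = math.sqrt(n)

print(f"old approximation factor ~ e^{math.sqrt(n):.1f} ~ {old_approx:.3g}")
print(f"new approximation factor ~ {new_approx:.1f}")
assert old_approx / new_approx > 1e8  # many orders of magnitude apart
```

So at this dimension the claimed improvement in the approximation factor is roughly nine orders of magnitude, for the same running time.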

So, my intuition for why this is possible is that -- due to the presence of sub-exp-many more S-units than units -- the density of points in the log-S-unit lattice is much higher than the density of points in the log-unit lattice, so CVP will have a higher quality outcome. Yes?

__vi) Do you have a formal analysis akin to this in the upcoming paper? Or are working toward it, perhaps, but believe this is the right intuition?__

Next to last, just a quick clarifying question:

__vii) My understanding is that a quantum computer speeds some steps up (from sub-exp-time to poly-time), but for every proposed step, there is a sub-exp-time classical algorithm, as well as a few steps where (currently) we only know sub-exp-time algorithms, so the quantum computer doesn't give a significant /asymptotic/ speed-up, as-is (ignoring future algorithm progress). Is that right?__

Finally:

__viii) What is the memory complexity of the attacker (perhaps just a quick 'asymptotic' statement)? Is it exp(n^(1/2)) * poly(n)?__

Thanks so much, Dan. Happy to discuss further.

Also looking forward to any comments from other people in the community (likely smarter than I) on this attack vector.

Cheers,

--Daniel


Aug 24, 2021, 12:35:34 AM8/24/21

to pqc-forum, daniel.apon, D. J. Bernstein, pqc-...@list.nist.gov

P.S. Oh, a bonus question too:

RE Slide 35 on \Lambda'' where the big 'improvement factor' happens--

Is it clear that adjoining square roots is restricted to a particular type of structured lattice; e.g. only Power-of-2 Cyclotomic, or only Cyclotomic? If so, could you explain for everyone to understand the basic idea?


Aug 24, 2021, 1:14:28 AM8/24/21

to daniel.apon, D. J. Bernstein, pqc-...@list.nist.gov

On Mon, Aug 23, 2021 at 9:35 PM 'daniel.apon' via pqc-forum

<pqc-...@list.nist.gov> wrote:

>

> P.S. Oh, a bonus question too:

>

> RE Slide 35 on \Lambda'' where the big 'improvement factor' happens--

>

> Is it clear that adjoining square roots is restricted to a particular type of structured lattice; e.g. only Power-of-2 Cyclotomic, or only Cyclotomic? If so, could you explain for everyone to understand the basic idea?

I'm not one of the authors, but the basic idea is that in cyclotomics
there are many S-units that are easy to write down. However, the S-unit
group has additional units. One way to extend is to find squares and
adjoin square roots that also land in the field. Under the usual
conjectures (RH of some sort) this is not that hard to do: e.g. all
but 1/8th of integers that are 1 mod 2, 1 mod 3, and 4 mod 5 are
squares, which makes most squareness sensing easy.
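The "squareness sensing" step can be illustrated with ordinary integers (a toy sketch of my own, not the authors' code): a non-square fails Euler's criterion modulo a random odd prime about half the time, so a few residue checks weed out nearly all non-squares before an exact check.

```python
import math

# Small odd primes used as cheap quadratic-residue filters.
FILTER_PRIMES = [3, 5, 7, 11, 13]

def is_qr(a, p):
    """Euler's criterion: a is a quadratic residue mod odd prime p
    iff a^((p-1)/2) == 1 (mod p), when p does not divide a."""
    a %= p
    if a == 0:
        return True  # divisible by p: the filter can't rule out squareness
    return pow(a, (p - 1) // 2, p) == 1

def is_square(n):
    """Fast squareness test: residue filters first, exact check last."""
    if n < 0:
        return False
    # Each filter prime rejects roughly half of the non-squares,
    # so most non-squares never reach the exact isqrt check.
    for p in FILTER_PRIMES:
        if not is_qr(n, p):
            return False
    r = math.isqrt(n)
    return r * r == n

# Sanity check against the exact definition on a small range.
assert all(is_square(k) == (math.isqrt(k) ** 2 == k) for k in range(10000))
```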

Sincerely,

Watson Ladd

--

Astra mortemque praestare gradatim

Aug 24, 2021, 1:59:44 AM8/24/21

to daniel.apon, D. J. Bernstein, pqc-...@list.nist.gov

Hi all,

On Tue, Aug 24, 2021 at 6:29 AM 'daniel.apon' via pqc-forum <pqc-...@list.nist.gov> wrote:

Hi Dan (speaking for myself),

This seems like a very excellent talk, and very timely.

Apologies for the slow reply; I've been digesting the material over the weekend. Two things:

1) It's worth pointing out first that, assuming the claims are verified and accepted (more or less as-is) by the community, the impact is a very significant bit-security loss (or at least an extremely significant step toward a very significant bit-security loss..) for the following NIST PQC R3 Finalists:

- Kyber

- Saber

- NTRU

- Dilithium

- Falcon

The talk is about improved algorithms for Ideal-SVP -- continuing a very interesting line of work spanning Campbell/Groves/Shepherd (2014) up to Ducas/Plancon/Wesolowski (2019) which gave (quantum) polynomial time, sub-exponential approximations for Ideal-SVP. At this point, however, there is no known reduction from any of the lattice finalists/alternates to this problem. Furthermore, the asymptotic worst-case to average-case reductions of Kyber, Saber, Dilithium (whatever these reductions are worth in practice) are not from Ideal-SVP either; they are from Module-SIVP, which at this point is not known to be easier than SIVP for general lattices, following Langlois/Stehle (2012).

Best,

Vadim


Aug 24, 2021, 9:50:56 AM8/24/21

to Vadim Lyubashevsky, daniel.apon, D. J. Bernstein, pqc-...@list.nist.gov

To put a finer point on it: I have not seen the authors claim that the presented algorithm yields *any* potential bit-security loss against *any* of the NISTPQC R3 proposals.

Sincerely yours in cryptography,

Chris

Aug 24, 2021, 10:45:56 AM8/24/21

to pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov, asil...@uci.edu, eise...@math.psu.edu, ta...@hyperelliptic.org, kru...@uci.edu, christine.va...@nxp.com

Dear Dan, dear PQC-forum

(cc: Kirsten, Tanja, Karl, Alice, and Christine)

TL;DR:

- We thank the authors for the bug report on [DPW19]. We repeat the

already stated limitations of [DPW19].

- We note that the conjecture of slide 47 (a subexp_{1/2} algorithm for

(1+ε)-Ideal-SVP) is given absolutely no justification or explanation.

- With what is given, and standard heuristics, we infer that the proposed

algorithm is no better than existing literature [PHS19].

- We invoke Sagan's standard.

First, I (Leo) would like to thank you for spotting the "graphing typo"

from our paper [DPW19]. It has now been corrected on the eprint.

I'm also jumping on the opportunity to point out the stated limitation of
our lower bound: it applies to specific optimizations of the poly-time
algorithm obtained by combining [EHKS14]+[CDPR16]+[BS17]+[CDW17]. Indeed,
we already acknowledge in the paper that the S-unit approach [PHS19] with
sub-exponential complexity already violates this lower bound. This is
explicit in the intro and again in section 7.4. Breaking it is
unsurprising and not new.

Thank you also for sharing your interesting work in progress. The new

way of constructing S-units sounds quite interesting.

However, we (Leo and Alice) are surprised to see a sudden jump to an

extraordinary conjecture on slide 47: a subexp_{1/2} algorithm for

(1+ε)-Ideal-SVP. No justification or explanation is given. The rest of

the talk includes irrelevant experiments in dimension 32 and 64. These

experiments concern a different algorithm than the one of slide 47. In

particular, the dimension 64 experiment brute-forces collisions in the

class group: this would unavoidably scale super-exponentially in time

and memory just because of the cardinality growth of this group.

Our analysis of the algorithm of slide 47 suggests that it leads to a

sub-exponential approximation factor of at least subexp_{1/4}(n), similar to prior

claims [PHS19]. Indeed, this algorithm seems to follow [PHS19] with the

following variations:

- a different construction for the set of S-units (precomputation)

- the use of d = subexp_{1/2}(n) many places rather than O(n)

In particular, the main loop appears to be essentially the Laarhoven

[Laa16] algorithm, written multiplicatively. In the literature, the key

remark for analyzing how often Log(u/v) reduces Log(g) is the following

fact:

in dimension d, if x,y are uniform on spheres of radii a, b respectively

(with a > b), then the probability that y reduces x (i.e., ||x-y|| <

||x||) is (1-(b/2a)^2)^{Θ(d)}.
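This stated fact is easy to spot-check numerically. The sketch below is my own code (the radii a = 2, b = 1 and the dimensions are arbitrary choices for illustration): it samples x, y uniformly on spheres of radii a > b and estimates the probability that y reduces x, which visibly collapses as d grows.

```python
import math, random

def random_sphere_point(d, radius, rng):
    """Uniform point on the (d-1)-sphere of the given radius:
    normalize a standard Gaussian vector."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in v))
    return [radius * c / norm for c in v]

def reduction_probability(d, a, b, trials, rng):
    """Estimate Pr[ ||x - y|| < ||x|| ] for x, y uniform on
    spheres of radii a and b (with a > b)."""
    hits = 0
    for _ in range(trials):
        x = random_sphere_point(d, a, rng)
        y = random_sphere_point(d, b, rng)
        if math.dist(x, y) < a:  # ||x|| == a by construction
            hits += 1
    return hits / trials

rng = random.Random(0)
p_lo = reduction_probability(d=8, a=2.0, b=1.0, trials=20000, rng=rng)
p_hi = reduction_probability(d=64, a=2.0, b=1.0, trials=20000, rng=rng)
# The heuristic (1 - (b/2a)^2)^{Theta(d)} predicts exponential decay in d.
assert p_lo > p_hi
```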

According to this heuristic, the probability of reducing log(g) to a

size comparable to Log(u/v) would be *ridiculously* small: exp(-Θ(d)) =

exp(-subexp(n)). This heuristic might need refinements given the integer

and sparse nature of the Log on the finite places. Yet, even just

projecting on the O(n) infinite places, we do not see how your algorithm

could find something smaller than Ω(n^(1/4)) * ||Log(u/v)|| for the given

parameters. That leads to an approximation factor of at least

subexp_{1/4}(n), and maybe much more because of the huge amount of

ignored finite places. That is no better than [PHS19] for the same

online running time.

Of course, this may miss subtleties not presented on your slide.

But after all, the burden of proof lies on your side.

Extraordinary claims require extraordinary evidence.

All the best

-- Léo Ducas & Alice Pellet-Mary


Aug 28, 2021, 8:10:52 PM8/28/21

to pqc-forum, leo.d...@gmail.com, D. J. Bernstein, pqc-...@list.nist.gov, asil...@uci.edu, eise...@math.psu.edu, ta...@hyperelliptic.org, kru...@uci.edu, christine.va...@nxp.com

Hello, Dan and all (speaking in my official capacity),

Thank you for pointing out your talk at the recent SIAM workshop.

Upon taking the week to review the material presented and after listening to the public community on this issue, the NIST PQC team judges as follows:

__This work does not impact the ongoing standardization process as-is.__

In particular, as a matter of science, there remain two barriers between this line of work and any impact on the bit-security of NIST PQC Finalists.

- First, according to the conjectured research program in which this work resides, one would need to find shortest (or approximately shortest) vectors in ideal lattices. Specifically, one would need to solve the Approximate-IdealSVP problem for an extremely small approximation factor.

This is what is explicitly claimed within this talk. However, proper evidence is not provided. It is not present in the talk, nor is it present in public discourse.

Naturally, both the NIST PQC team and the public await sufficient evidence (such as a paper with actual analysis) to substantiate the claim made in the final slide of the talk. However, until such evidence appears, __the claim is not proper.__

- Second, even given an algorithm that efficiently solves IdealSVP or eps-Approx-IdealSVP for arbitrarily small eps, there remains a further barrier to this line of work. IdealSVP is essentially equivalent to Rank-1 ModuleSVP. Systems such as (the Round 2 candidate) NewHope, based on Ring-LWE, are more akin to Rank-2 ModuleSVP. (And an analogous situation holds for the NIST PQC Round 3 Finalists.) For many years (almost a decade now, more or less), there has been a known gap between algorithms for Rank-1 ModuleSVP and algorithms for Rank-k ModuleSVP for k >= 2.

We also remain unaware of any serious attempt to bridge the gap between Rank-1 ModuleSVP and Rank-2 ModuleSVP.

To wit, it does not seem obvious that this line of work will reduce the bit-security of any Round 3 PQC candidate within the next few months, nor even the next few years.

Of course, we (and the public) remain interested in any research paper that explicitly lays out its analysis in a verifiable manner.

Further, the NIST PQC team re-expresses its displeasure that the ideas underlying this talk were apparently around since May 2020 (as claimed in private communication to us), and despite various significant attempts to ask for a public disclosure of standards-relevant technical content, the speaker in this talk waited approximately **480 days** before presenting his material publicly (nor did he retract his claims of large-scale cryptanalytic outcomes over those 480 days).

Best,

--Daniel Apon, NIST PQC

Aug 28, 2021, 10:07:08 PM8/28/21

to daniel.apon, leo.d...@gmail.com, D. J. Bernstein, pqc-...@list.nist.gov, asil...@uci.edu, eise...@math.psu.edu, ta...@hyperelliptic.org, kru...@uci.edu, christine.va...@nxp.com

On Sun, Aug 29, 2021 at 12:10 AM 'daniel.apon' via pqc-forum

<pqc-...@list.nist.gov> wrote:

> Systems such as (the Round 2 candidate) NewHope, based on Ring-LWE, are more akin to Rank-2 ModuleSVP. (And an analogous situation holds for the NIST PQC Round 3 Finalists.)

Is there a specific reason that NIST has chosen to use a Round 2 candidate as the example here rather than a finalist?

An "Analogous situation" with unspecified candidates is rather

unfalsifiable, and since NewHope is not a finalist or an alternate its

behavior is not that interesting... Except that it is interesting if

proposals which have better arguments for security against particular

attacks were already eliminated for various reasons.

Aug 28, 2021, 10:19:31 PM8/28/21

to pqc-forum, gmax...@gmail.com, leo.d...@gmail.com, D. J. Bernstein, pqc-...@list.nist.gov, asil...@uci.edu, eise...@math.psu.edu, ta...@hyperelliptic.org, kru...@uci.edu, christine.va...@nxp.com, daniel.apon

Hi (just a quick reply),

The reason for that choice comes from the theoretical study, but it's fairly simple, and I'm happy to explain:

Ring-LWE can be viewed as Rank-1 Module-LWE (with ring dimension n).

Plain-LWE can be viewed as Rank-n Module-LWE (with ring dimension 1).

A key recovery attack against Ring-LWE type systems (resp. Rank-1 Module-LWE type systems) can be viewed as an attack against Rank-2 ModuleSVP.

A key recovery attack against Rank-2 Module-LWE type systems can be viewed as an attack against Rank-3 ModuleSVP.
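To make the rank bookkeeping concrete, here is a toy Ring-LWE instance (my own sketch; the parameters N = 8, q = 97 are illustrative only and nowhere near secure). The public pair (a, b = a*s + e) lives in R_q = Z_q[x]/(x^N + 1); recovering the short (s, e) from it amounts to a short-vector problem in a rank-2 module over R_q, one rank above the rank-1 (ideal) setting that Ideal-SVP addresses.

```python
import random

# Toy parameters (illustrative only -- far too small for security).
N, Q = 8, 97  # ring R_q = Z_q[x] / (x^N + 1), power-of-2 cyclotomic

def poly_mul(f, g):
    """Multiply in Z_q[x]/(x^N + 1): negacyclic convolution."""
    out = [0] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < N:
                out[k] = (out[k] + fi * gj) % Q
            else:  # x^N = -1 wraps coefficients with a sign flip
                out[k - N] = (out[k - N] - fi * gj) % Q
    return out

def small_poly(rng):
    """Ternary 'short' polynomial, coefficients in {-1, 0, 1}."""
    return [rng.choice([-1, 0, 1]) for _ in range(N)]

rng = random.Random(1)
a = [rng.randrange(Q) for _ in range(N)]   # uniform public ring element
s = small_poly(rng)                        # secret
e = small_poly(rng)                        # error
b = [(ab + ei) % Q for ab, ei in zip(poly_mul(a, s), e)]

# Key recovery asks for the short pair (s, e) with b - a*s = e:
# a short vector in a rank-2 module over R_q, one rank higher than
# the rank-1 (ideal) setting of Ideal-SVP.
residual = [(bi - asi) % Q for bi, asi in zip(b, poly_mul(a, s))]
assert all(r in (0, 1, Q - 1) for r in residual)  # residual is short
```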

For more information you can see the discussion in the introduction of [DPW19] (cited as the work with the typo in the talk); cf. https://eprint.iacr.org/2019/234.pdf. See bullet point 2, top of page 3.

Similarly, I would point out an independent work from CRYPTO 2020 (just as another off-hand example); see [MS-D20] at https://eprint.iacr.org/2019/1142.pdf.

As far as I'm aware (or rather, we the NIST team are aware), this rank-1 vs rank-(2+) gap for (approx)SVP is known, very heavily cited throughout the academic research literature, and not a contested issue at this time.

----

TL;DR: Since the technical gap occurs between Rank-1 (IdealSVP) and Rank-2 (ModuleSVP), it's easier to describe a Ring-LWE scheme rather than a Module-LWE scheme such as Saber or Kyber (which, at Category 1, is more of a Rank-3 approx-ModuleSVP problem).

All that said (hopefully as informative discussion), it is still reasonable to point out that the NTRU candidate is more of a Rank-2 style of scheme (and the example could have been given with NTRU instead of NewHope). I appreciate you pointing it out. =)

--Daniel


Aug 28, 2021, 10:33:17 PM8/28/21

to pqc-forum, daniel.apon, gmax...@gmail.com, leo.d...@gmail.com, D. J. Bernstein, pqc-...@list.nist.gov, asil...@uci.edu, eise...@math.psu.edu, ta...@hyperelliptic.org, kru...@uci.edu, christine.va...@nxp.com

P.S. I believe it is important to make these issues clear for two reasons:

1) Setting a clear standard for what would constitute a break of the theory of security of the standardization candidates

2) As an explanatory discussion for why the experiments in the presented talk spawning this discussion thread were experiments for a particular type of lattice problem and not (rather) experiments for key recovery attacks against (even, toy-scale) cryptosystems


Aug 28, 2021, 11:29:23 PM8/28/21

to daniel.apon, D. J. Bernstein, asil...@uci.edu, christine.va...@nxp.com, eise...@math.psu.edu, gmax...@gmail.com, kru...@uci.edu, leo.d...@gmail.com, pqc-forum, ta...@hyperelliptic.org

Obviously speaking for myself as I haven’t been at NIST for 2 years:

Ring-LWE (in the form used by all NIST candidate schemes) should properly be viewed as an instance of rank-2 Module-LWE with ring dimension n, but a (possibly) easier instance of it, since the (perp) lattice contains a very short vector (s, -1), far shorter than a random lattice of that form would be likely to contain.

This doesn’t affect the larger point that the result of Bernstein et al may well cast (further) doubts on whether the reduction in LPR10 is even “qualitatively” meaningful, but isn’t particularly relevant to any NIST candidate schemes, as reductions to forms of SVP have never been used to set scheme parameters.

Nor has any evidence been presented (to my knowledge) to suggest there isn’t a “phase shift” in hardness between rank 1 and rank 2; any evidence that rank 2 might not be much harder than rank 1 would indeed cast doubt on various structured LWE cryptosystems.

A question I haven’t thought about, but that should be answered or attempted as it might shed light on the subject:

GapSVP is

* easy for rank-1 modules of ring dimension n in the canonical embedding (an immediate application of Minkowski's theorem and the fact that the n shortest independent vectors are all the same length); see slide 13 here

* almost certainly hard for rank-n modules with ring dimension 1

Where does it stand for, say, rank-2 modules in ring dimension n/2?

If the answer is “still easy” (I’m not aware of any algorithm to do so) that might cause some doubt!
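For completeness, here is the standard argument behind the rank-1 "easy" bullet, sketched with constants glossed over (my summary, not taken from the slides):

```latex
% For a nonzero $v$ in an ideal $I$ of a degree-$n$ number field $K$, the
% algebraic norm satisfies $|N(v)| \ge N(I)$, so AM--GM in the canonical
% embedding gives a lower bound on every lattice vector:
\[
  \|v\| \;\ge\; \sqrt{n}\,|N(v)|^{1/n} \;\ge\; \sqrt{n}\,N(I)^{1/n}.
\]
% Minkowski's theorem bounds the first minimum from above, using
% $\mathrm{covol}(I) = N(I)\sqrt{|\Delta_K|}$:
\[
  \lambda_1(I) \;\le\; \sqrt{n}\,\bigl(N(I)\sqrt{|\Delta_K|}\bigr)^{1/n}
               \;=\; \sqrt{n}\,|\Delta_K|^{1/2n}\,N(I)^{1/n}.
\]
% So $\lambda_1(I)$ is pinned down up to a factor $|\Delta_K|^{1/2n}$, which
% is polynomial in $n$ for cyclotomics. Outputting $\sqrt{n}\,N(I)^{1/n}$
% (computable from any basis) therefore decides GapSVP whenever the gap
% exceeds that polynomial factor -- no short vector ever needs to be found.
```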


-Jacob Alperin-Sheriff
