Nov 9, 2020, 8:35:28 PM11/9/20

to pqc-co...@nist.gov, pqc-forum

There is a strong possibility that NIST intend to standardize Classic McEliece at the

end of Round 3 as a conservative option. Their Status Report on the Second Round

expressed confidence in its construction and said that they believed it could be ready

for standardization if selected. Standardizing Classic McEliece, however, also means

standardizing at least some of the parameter sets proposed by the submitters.

I am filing this comment to request that the Classic McEliece submitters justify the

claimed security categories for their parameters.

The Round 3 submission does not include any concrete analysis of the security provided

by the proposed parameter sets. There is a lot of hype about the asymptotic result from

Canto Torres and Sendrier, but this is ultimately irrelevant for any finite choice of

parameters. The only firm statement contained in the submission is:

"[Bernstein, Lange and Peters. 2008] reported that its attack uses 2^266.94 bit

operations to break the (13,6960,119) parameter set. Subsequent ISD variants

have reduced the number of bit operations considerably below 2^256."

The submission argues that any reduction in classical security from improved ISD attacks

will be offset by the increased memory requirements. While this may be true for some ISD

variants such as BJMM, they provide no analysis to back this up. It also ignores other

ISD variants such as Stern that need significantly less memory. In this case the finite

regime analysis from the LEDACrypt team gives the following estimates of computational

and memory costs for the pre-merger Classic McEliece and NTS-KEM parameters:

Parameter Set Compute Memory Target

mceliece-3488-064 152.51 34.68 143

mceliece-4608-096 194.36 35.66 207

mceliece-6688-128 270.46 37.48 272

mceliece-6960-119 271.18 47.58 272

mceliece-8192-128 306.63 67.64 272

nts-kem-4096-064 166.76 35.60 143

nts-kem-8192-080 248.01 77.63 207

nts-kem-8192-136 313.52 67.50 272

By this analysis:

- The mceliece-4608-096 parameters are about 13 bits below Category 3 with an attack

that needs slightly over 6 GiB of memory.

- The mceliece-6688-128 parameters are borderline Category 5 with an attack that needs

slightly over 22 GiB of memory.

- The mceliece-6960-119 parameters are borderline Category 5 with an attack that needs

around 24 TiB of memory.
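As a sanity check on these unit conversions (my own arithmetic, not taken from the paper), the log2(bits) figures in the Memory column translate as follows:

```python
def mem_units(log2_bits):
    """Convert a log2(bits) memory figure to GiB and TiB."""
    gib = 2 ** (log2_bits - 33)  # 2^33 bits = 1 GiB
    tib = 2 ** (log2_bits - 43)  # 2^43 bits = 1 TiB
    return gib, tib

# Memory column entries from the table above:
print("mceliece-4608-096: %.1f GiB" % mem_units(35.66)[0])  # ~6.3 GiB
print("mceliece-6688-128: %.1f GiB" % mem_units(37.48)[0])  # ~22.3 GiB
print("mceliece-6960-119: %.1f TiB" % mem_units(47.58)[1])  # ~23.9 TiB
```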

If Classic McEliece is to be standardized as a conservative option then the parameter sets

that are standardized with it should also be chosen conservatively. The NTS-KEM parameters

were. Three out of the five Classic McEliece parameters were not.

Kirk

Nov 11, 2020, 12:20:08 PM11/11/20

to pqc-forum, pqc-co...@nist.gov

Kirk Fleming wrote:

> In this case the finite regime analysis from the LEDACrypt team gives the following

> estimates of computational and memory costs for the pre-merger Classic McEliece

> and NTS-KEM parameters:

It was pointed out that my finite regime analysis reference was unclear. The figures were

taken from Tables 4 and 5 in Baldi, Barenghi, Chiaraluce, Pelosi and Santini, "A finite

regime analysis of Information Set Decoding algorithms", Algorithms 12, no. 10 (2019).

The paper can be found at https://www.mdpi.com/1999-4893/12/10/209/pdf.

Kirk

Nov 19, 2020, 7:38:15 AM11/19/20

to pqc-co...@nist.gov, pqc-forum

Kirk Fleming writes:

> mceliece-3488-064 152.51 34.68 143

> mceliece-4608-096 194.36 35.66 207

> mceliece-6688-128 270.46 37.48 272

> mceliece-6960-119 271.18 47.58 272

> mceliece-8192-128 306.63 67.64 272

We agree that the second column is sometimes less than the fourth column.

However, the second column ignores the huge overheads measured by the

third column. As stated in the submission:

A closer look shows that the attack in [11] is bottlenecked by random

access to a huge array (much larger than the public key being

attacked), and that subsequent ISD variants use even more memory. The

same amount of hardware allows much more parallelism in attacking,

e.g., AES-256.

The submission goes on to state expectations regarding (e.g.) 6960119

being more expensive to break than AES-256. The numbers you quote (e.g.,

2^271.18 operations on 2^47.58 bits of RAM) are consistent with these

expectations.

The literature shows that optimized hardware for AES attacks has similar

efficiency to multipliers. The multipliers in an Intel CPU core carry

out millions of bit operations in the same time that it takes the core

to retrieve data from (say) 6GB of RAM, never mind RAM contention across

cores. GPUs and TPUs show the costs of RAM even more clearly. On a

larger scale, decades of supercomputing literature consistently fit a

square-root scaling of RAM costs, and science-fiction 3D computers

(which seem much harder to build than quantum computers) would still

have cube-root costs.
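To illustrate with a back-of-the-envelope sketch (not a full cost analysis): under square-root RAM scaling, treating every operation as a random access gives a crude upper-bound adjustment of mem/2 in the exponent:

```python
def sqrt_ram_adjusted(ops_log2, mem_log2_bits):
    """Upper-bound sketch: if each of the 2^ops_log2 operations were a
    random access into 2^mem_log2_bits bits of RAM, square-root cost
    scaling adds mem_log2_bits/2 to the exponent."""
    return ops_log2 + mem_log2_bits / 2.0

# Quoted mceliece-6960-119 figures: 2^271.18 ops, 2^47.58 bits of RAM.
print(round(sqrt_ram_adjusted(271.18, 47.58), 2))  # ~294.97
```

Real attacks would not make a random access on every operation, so the true adjustment lies somewhere between 0 and this bound.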

> I am filing this comment to request that the Classic McEliece submitters

> justify the claimed security categories for their parameters.

Can you please clarify how you think that the numbers already in the

literature don't already justify the assignments in the submission?

Our best guess is that you're assuming that NIST requires >=2^272

operations in a specified metric for "category 5", and that 2^271.18 is

an evaluation in this metric. But the assumption here isn't correct.

NIST has not issued clear definitions of the cost metrics for its

"categories". If at some point NIST does issue clear, stable definitions

of its cost metrics, then it should also allow all submitters to set

their "category" assignments accordingly.

Note that one can make any cryptosystem sound much easier to break by

choosing a cost metric that assigns unrealistically low cost to

operations used in attacks; RAM access is just one example. If the same

operations are not bottlenecks in attacks against AES-m or SHA-n then

this can reverse comparisons to AES-m or SHA-n respectively.

> The Round 3 submission does not include any concrete analysis of the

> security provided by the proposed parameter sets.

Not true. For example, the stated expectation to be "more expensive to

break than AES-256" is concrete, and the rationale is stated. A more

detailed analysis depends on the cost metric.

> There is a lot of hype about the asymptotic result from Canto Torres

> and Sendrier

The graph in https://video.cr.yp.to/2020/0813/video.html shows that

asymptotic lattice security levels against attacks known 10 years ago

were 42% higher than lattice security levels against attacks known today

(for otherwise unbroken lattice systems), while for McEliece's system

the change is 0%. Any thorough evaluation of security risks will take

these facts into account, along with other risk indicators.

> but this is ultimately irrelevant for any finite choice of parameters.

The submission already says, at the top of the "Information-set

decoding, concretely" section:

We emphasize that o(1) does not mean 0: it means something that

converges to 0 as n->infty. More detailed attack-cost evaluation is

therefore required for any particular parameters.

The section goes on to summarize what the literature says about concrete

cost analyses, and ends with the AES-256 comparison reviewed above.

> The submission argues that any reduction in classical security from

> improved ISD attacks will be offset by the increased memory

> requirements. While this may be true for some ISD variants such as

> BJMM, they provide no analysis to back this up.

Again, here's the critical quote from the submission:

A closer look shows that the attack in [11] is bottlenecked by random

access to a huge array (much larger than the public key being

attacked), and that subsequent ISD variants use even more memory. The

same amount of hardware allows much more parallelism in attacking,

e.g., AES-256.

The exact impact depends on the cost metric.

> It also ignores other

> ISD variants such as Stern that need significantly less memory.

No, the submission doesn't ignore this. The "attack in [11]" (see quote

above) is a Stern-type attack. The submission's comment that "subsequent

ISD variants use even more memory" (see quote above) is pretty much the

same as your comment that Stern-type attacks "need significantly less

memory".

> - The mceliece-4608-096 parameters are about 13 bits below Category 3 with

> an attack that needs slightly over 6 GiB of memory.

Evaluating such a claim would require a clear, stable definition of

"Category 3". If NIST settles on a definition to be used in NISTPQC then

submitters will be able to make "category" assignments accordingly.

See above regarding the actual costs of accessing 6GB of RAM.

> If Classic McEliece is to be standardized as a conservative option

> then the parameter sets that are standardized with it should also be

> chosen conservatively. The NTS-KEM parameters were. Three out of the

> five Classic McEliece parameters were not.

Please clarify. Are you saying anything here beyond the 2^194 vs. 2^207,

2^270 vs. 2^272, and 2^271 vs. 2^272 discussed above?

If you're concerned about 2^270 RAM operations vs. 2^272 bit operations

then you should be far more concerned about NIST dropping the targets

from 2^256 to 2^128:

* Multi-target attacks turn 2^128 into something already feasible for

large-scale attackers today. ECC is an exception to this (provably,

via a tight self-reduction), but codes and lattices and AES aren't

exceptions. See https://blog.cr.yp.to/20151120-batchattacks.html.

* Both round-1 Classic McEliece options were at the 2^256 level.

However, NIST then praised NTS-KEM for offering lower-security

options, and asked Classic McEliece for "parameter sets for other

security categories".

From this perspective, NTS-KEM and NIST were far less conservative than

Classic McEliece was. Note that general trends in networking and in CPUs

are making the high-security Classic McEliece parameter sets affordable

for more and more users.

There's also a tremendous amount to say about the dangers of unnecessary

complexity. One aspect of this is specifying more parameter sets than

necessary. Of course, NIST's request for lower-security parameter sets

was an offer that Classic McEliece couldn't refuse, but we continue to

recommend the 2^256 parameter sets. Within those parameter sets:

* The 6960119 parameter set goes back to 2008, maximizing security

for 1MB keys. This has held up just fine, and can reasonably be

viewed as the default choice.

* The new 6688128 parameter set is a slight variant that requires n

and t to be multiples of 32, which _might_ be slightly better for

formal verification, but the jury is still out on this. (Components

are being verified---see https://cr.yp.to/papers.html#controlbits

for the latest news---but this work hasn't yet reached the

components that could interact with the multiple-of-32 question.)

* The 8192128 parameter set is bigger, but the 6960119 and 6688128

parameter sets include an extra defense explained in the

submission. People paranoid enough to imagine 2^306 vs. 2^270

making a difference should also be paranoid enough to imagine this

defense making a difference.

One can easily write down even larger parameter sets. Also, the ISD

attack analyses in the literature are precise and straightforward, and

it should be easy to evaluate the security of any desired parameters in

any well-defined metric. However, the right order of events is, first,

for NIST to fully define the cost metric to be used for "categories",

and, second, for all submission teams to evaluate costs in this metric.

---Dan (on behalf of the Classic McEliece team)

Nov 23, 2020, 8:01:15 PM11/23/20

to pqc-co...@nist.gov, pqc-forum

Dan Bernstein wrote:

> Kirk Fleming writes:

>>

>> mceliece-3488-064 152.51 34.68 143

>> mceliece-4608-096 194.36 35.66 207

>> mceliece-6688-128 270.46 37.48 272

>> mceliece-6960-119 271.18 47.58 272

>> mceliece-8192-128 306.63 67.64 272

>

> We agree that the second column is sometimes less than the fourth column.

> However, the second column ignores the huge overheads measured by the

> third column.

No. The second column includes a logarithmic memory overhead. This is not

an accurate analysis of the cost of memory accesses but it does not ignore it.

> As stated in the submission:

>

> A closer look shows that the attack in [11] is bottlenecked by random

> access to a huge array (much larger than the public key being

> attacked), and that subsequent ISD variants use even more memory. The

> same amount of hardware allows much more parallelism in attacking,

> e.g., AES-256.

This is the second time you've used the word "huge". Let's give it some

perspective. 2^35.66 bits is less RAM than the laptop I'm using to write

this. 2^47.58 bits is the same as Amazon EC2's u-24tb1.metal instance.

Maybe the "u" stands for 'uge.

> The submission goes on to state expectations regarding (e.g.) 6960119

> being more expensive to break than AES-256. The numbers you quote (e.g.,

> 2^271.18 operations on 2^47.58 bits of RAM) are consistent with these

> expectations.

Your submission and this reply focus almost exclusively on the

mceliece-6960-119 parameter set. What about the other four?

> The literature shows that optimized hardware for AES attacks has similar

> efficiency to multipliers. The multipliers in an Intel CPU core carry

> out millions of bit operations in the same time that it takes the core

> to retrieve data from (say) 6GB of RAM, never mind RAM contention across

> cores. GPUs and TPUs show the costs of RAM even more clearly. On a

> larger scale, decades of supercomputing literature consistently fit a

> square-root scaling of RAM costs, and science-fiction 3D computers

> (which seem much harder to build than quantum computers) would still

> have cube-root costs.

No. It's not as simple as applying a square-root overhead for memory across

the board. For an interesting (but not very rigorous) example see

http://www.ilikebigbits.com/2014_04_21_myth_of_ram_1.html. The blog argues

for a square-root memory cost. If you look at the graphs this is a poor

fit in the 10 MiB to 6 GiB range.

Not all memory accesses are the same. The memory overhead depends on the

number of random accesses to high-latency memory. One cost model in Ray

Perlner's slides assumes that up to 2^50 bits of memory is local with

unit cost memory accesses.
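A two-tier model along those lines can be sketched as follows. This is my own reading of the slides, with the 2^50-bit local-memory threshold as the assumed parameter, and square-root scaling (one of several plausible choices) applied only beyond it:

```python
def access_cost_log2(mem_log2_bits, local_log2_bits=50.0):
    """Hypothetical two-tier model: random accesses are unit cost while
    the working set fits in 2^local_log2_bits bits of local memory, and
    pay square-root scaling only on the excess beyond that threshold."""
    return max(0.0, (mem_log2_bits - local_log2_bits) / 2.0)

# Stern against mceliece-4608-096 uses 2^35.66 bits -- entirely local,
# so this model adds no per-access overhead at all:
print(access_cost_log2(35.66))  # 0.0
```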

>> I am filing this comment to request that the Classic McEliece submitters

>> justify the claimed security categories for their parameters.

>

> Can you please clarify how you think that the numbers already in the

> literature don't already justify the assignments in the submission?

Let's try an easier question. Can you point to the paper cited by your

submission that gives numbers for the mceliece-4608-096 parameter set

which clearly justify its assignment as category 3?

> Our best guess is that you're assuming that NIST requires >=2^272

> operations in a specified metric for "category 5", and that 2^271.18 is

> an evaluation in this metric. But the assumption here isn't correct.

> NIST has not issued clear definitions of the cost metrics for its

> "categories". If at some point NIST does issue clear, stable definitions

> of its cost metrics, then it should also allow all submitters to set

> their "category" assignments accordingly.

>

> Note that one can make any cryptosystem sound much easier to break by

> choosing a cost metric that assigns unrealistically low cost to

> operations used in attacks; RAM access is just one example. If the same

> operations are not bottlenecks in attacks against AES-m or SHA-n then

> this can reverse comparisons to AES-m or SHA-n respectively.

No. You are arguing that because NIST has not specified a single metric

no evaluation in any metric can be valid. The Call for Proposals actually

says (NIST's emphasis):

"In order for a cryptosystem to satisfy one of the above security

requirements, any attack must require computational resources

comparable to or greater than the stated threshold, with respect

to *all* metrics that NIST deems to be potentially relevant to

practical security."

NIST gives circuit size as an example of a metric and says that AES-192 key

recovery is equivalent to 2^207 classical gates in this metric. The finite

regime analysis estimates that breaking mceliece-4608-096 with Stern takes

2^194.4 classical gates including a logarithmic overhead for memory accesses

to 6 GiB of RAM. Whether or not NIST deems this to be potentially relevant to

practical security is up to them.

>> The Round 3 submission does not include any concrete analysis of the

>> security provided by the proposed parameter sets.

>

> Not true. For example, the stated expectation to be "more expensive to

> break than AES-256" is concrete, and the rationale is stated. A more

> detailed analysis depends on the cost metric.

We have different ideas of what constitutes concrete analysis. Let's take

a look at what section 8.2 of your submission actually says. Don't worry.

It's not that long.

"As an example, our parameter set mceliece6960119 takes m=13, n=6960,

and t=119. This parameter set was proposed in the attack paper [11]

that broke the original McEliece parameters (10,1024,50).

That paper reported that its attack uses 2^266.94 bit operations to

break the (13,6960,119) parameter set. Subsequent ISD variants have

reduced the number of bit operations considerably below 2^256.

However:

- None of these analyses took into account the costs of memory

access. A closer look shows that the attack in [11] is bottle-

necked by random access to a huge array (much larger than the

public key being attacked), and that subsequent ISD variants

use even more memory. The same amount of hardware allows much

more parallelism in attacking, e.g., AES-256.

- Known quantum attacks multiply the security level of both ISD

and AES by an asymptotic factor of 0.5 + o(1), but a closer

look shows that the application of Grover's method to ISD

suffers much more overhead in the inner loop.

We expect that switching from a bit-operation analysis to a cost

analysis will show that this parameter set is more expensive to

break than AES-256 pre-quantum and much more expensive to break

than AES-256 post-quantum."

That's it. No mention of the other parameter sets. No attempt to quantify

the impact of memory accesses. Just a vague statement that something

that is "considerably below 2^256" bit operations will be fine because

memory.

>> There is a lot of hype about the asymptotic result from Canto Torres

>> and Sendrier

>

> The graph in https://video.cr.yp.to/2020/0813/video.html shows that

> asymptotic lattice security levels against attacks known 10 years ago

> were 42% higher than lattice security levels against attacks known today

> (for otherwise unbroken lattice systems), while for McEliece's system

> the change is 0%. Any thorough evaluation of security risks will take

> these facts into account, along with other risk indicators.

>

>> but this is ultimately irrelevant for any finite choice of parameters.

This is the irrelevant hype I mean. You are ignoring asymptotic improvements

to half-distance decoding because McEliece uses errors with sublinear weight

yet compare it with asymptotic improvements to the exact short vector problem

even though lattice schemes depend on different variants of the problem.

> The submission already says, at the top of the "Information-set

> decoding, concretely" section:

>

> We emphasize that o(1) does not mean 0: it means something that

> converges to 0 as n->infty. More detailed attack-cost evaluation is

> therefore required for any particular parameters.

>

> The section goes on to summarize what the literature says about concrete

> cost analyses, and ends with the AES-256 comparison reviewed above.

The rest of section 8.2 is quoted above in its entirety. The literature

says much more about concrete cost analyses than this "summary".

>> The submission argues that any reduction in classical security from

>> improved ISD attacks will be offset by the increased memory

>> requirements. While this may be true for some ISD variants such as

>> BJMM, they provide no analysis to back this up.

>

> Again, here's the critical quote from the submission:

>

> A closer look shows that the attack in [11] is bottlenecked by random

> access to a huge array (much larger than the public key being

> attacked), and that subsequent ISD variants use even more memory. The

> same amount of hardware allows much more parallelism in attacking,

> e.g., AES-256.

>

> The exact impact depends on the cost metric.

>

>> It also ignores other

>> ISD variants such as Stern that need significantly less memory.

>

> No, the submission doesn't ignore this. The "attack in [11]" (see quote

> above) is a Stern-type attack. The submission's comment that "subsequent

> ISD variants use even more memory" (see quote above) is pretty much the

> same as your comment that Stern-type attacks "need significantly less

> memory".

>

>> - The mceliece-4608-096 parameters are about 13 bits below Category 3 with

>> an attack that needs slightly over 6 GiB of memory.

>

> Evaluating such a claim would require a clear, stable definition of

> "Category 3". If NIST settles on a definition to be used in NISTPQC then

> submitters will be able to make "category" assignments accordingly.

>

> See above regarding the actual costs of accessing 6GB of RAM.

No. The definition of category 3 is stable. Your argument is that NIST has

not specified a single metric. Evaluating the claim only needs a metric that

NIST deems practically relevant.

For example, the cost of breaking mceliece-4608-096 using Stern with no,

logarithmic, cube-root, or square-root memory overhead is:

Parameter Set Free Log Cbrt Sqrt

mceliece-4608-096: 189.2 194.4 200.3 204.6

Ray Perlner's cost model slides include almost all of these metrics. None

of them give an estimate greater than 2^207 classical gates.
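For concreteness, the four conventions can be sketched as below. (Note this won't exactly reproduce the table above: the attack is re-optimized separately under each metric in the paper, whereas this sketch applies a fixed penalty to the free-cost figure.)

```python
import math

def with_overhead(free_log2, mem_log2_bits, model):
    """Attach a memory-access penalty (in the exponent) to a base gate
    count, under four common conventions."""
    penalty = {
        "free": 0.0,
        "log":  math.log2(mem_log2_bits),  # each access costs log2(M) ops
        "cbrt": mem_log2_bits / 3.0,
        "sqrt": mem_log2_bits / 2.0,
    }[model]
    return free_log2 + penalty

# mceliece-4608-096: 2^189.2 gates with 2^35.66 bits of memory.
for m in ("free", "log", "cbrt", "sqrt"):
    print(m, round(with_overhead(189.2, 35.66, m), 1))
```

Even the square-root convention applied to every gate lands around 2^207, at the AES-192 threshold rather than comfortably above it.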

>> If Classic McEliece is to be standardized as a conservative option

>> then the parameter sets that are standardized with it should also be

>> chosen conservatively. The NTS-KEM parameters were. Three out of the

>> five Classic McEliece parameters were not.

>

> Please clarify. Are you saying anything here beyond the 2^194 vs. 2^207,

> 2^270 vs. 2^272, and 2^271 vs. 2^272 discussed above?

>

> If you're concerned about 2^270 RAM operations vs. 2^272 bit operations

No, no, no. 2^189.2 classical gates does not mean 2^189.2 random memory

accesses. Not all gates involve a memory access. Not all memory accesses

are to high-latency memory. Not all high-latency memory accesses are

random. My point is that you can't just say that everything will be fine

because memory. You actually need to do the analysis.

> There's also a tremendous amount to say about the dangers of unnecessary

> complexity. One aspect of this is specifying more parameter sets than

> necessary. Of course, NIST's request for lower-security parameter sets

> was an offer that Classic McEliece couldn't refuse, but we continue to

> recommend the 2^256 parameter sets.

I assume that if Classic McEliece is chosen for standardization you will

be withdrawing the parameter sets that you don't recommend for use.

> One can easily write down even larger parameter sets. Also, the ISD

> attack analyses in the literature are precise and straightforward, and

> it should be easy to evaluate the security of any desired parameters in

> any well-defined metric. However, the right order of events is, first,

> for NIST to fully define the cost metric to be used for "categories",

> and, second, for all submission teams to evaluate costs in this metric.

No. The right order of events is, first, submit parameter sets with security

estimates in a metric that you think is practically relevant and, second,

let everyone else spend three rounds of the NIST process evaluating those

claims. That's what most of the other submissions have done.

> ---Dan (on behalf of the Classic McEliece team)

Kirk

Dec 7, 2020, 5:21:36 AM12/7/20

to pqc-co...@nist.gov, pqc-...@list.nist.gov

Speaking for myself in this message. As far as I can tell, the questions

specific to Classic McEliece have already been answered. There are some

broader NISTPQC process points here for NIST to address (as shown by the

inconsistent handling of different cryptosystems), and there are some

factual errors that I'll correct for the record.

As a starting point, there appears to be a misconception that NIST has

already defined metrics to be used for the NISTPQC "categories": namely,

(non-quantum and quantum) "gates" metrics, ignoring the costs of RAM.

If this were true then I would agree that all submissions have to report

security in these metrics. But it isn't true. NIST has never issued any

such definition. There are many different definitions of (non-quantum

and quantum) "gates" in the literature, with many variations (e.g.,

sometimes RAM gates, sometimes not), often producing radically different

cost numbers; NIST has never said which "gates" we're supposed to use.

The hard way to check this is to dig through the complete NISTPQC

archives and look at what NIST has written. The easy way, for people who

know some quantum algorithms, is to consider the following two facts:

* The NISTPQC call for proposals claims that known SHA3-256 collision

attacks need 2^146 "gates", with no quantum speedups.

* It's easy to find SHA3-256 collisions in only about 2^85 oracle

queries, which is in the ballpark of 2^100 quantum "gates" with the

"gates" defined in, e.g., Ambainis's famous distinctness paper and

used in many other quantum-walk papers.

To see the 2^85, apply Ambainis's algorithm to 128-bit strings

after a random prefix. Or, simpler and presumably faster, apply the

BHT (Brassard--Hoyer--Tapp) algorithm to 256-bit strings. This

means a Grover search through 170-bit inputs for hashes appearing

in a table of 2^86 hashes of other inputs.

To see the 2^100, start from the 2^85 and look at the cost of

SHA3-256 evaluation. Maybe ends up slightly over 100, but maybe

not: one can handle a large oracle cost by adjusting 170 down and

86 up. The exact number doesn't matter for anything I'll say here.
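
The table-size/Grover-size balance in BHT is easy to check numerically.
The sketch below (my own back-of-the-envelope, counting oracle queries
only, with no gate or memory costs) balances a table of 2^r hashes
against a Grover search over the remaining 2^(n-r) inputs, which takes
about 2^((n-r)/2) queries.

```python
# BHT collision search, query counts only: build a table of 2^r hashes,
# then Grover-search 2^(n-r) inputs for one colliding into the table.
def log2_queries(n, r):
    return max(r, (n - r) / 2)  # table building vs. Grover search

n = 256  # SHA3-256 output size
best_r = min(range(n + 1), key=lambda r: log2_queries(n, r))
print(best_r, log2_queries(n, best_r))  # -> 85 85.5
```

The integer optimum lands at r = 85, i.e. roughly the 2^86-entry table
and 2^85 queries described above; the exact split doesn't matter here.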

Given the gap between 2^100 and 2^146, why does the call for proposals

ignore BHT, Ambainis, etc.? Anyone who thinks that NIST has defined

"gates", and that NIST's "gates" allow free memory, will have trouble

explaining this.

It's clear how many _operations_ are used by (e.g.) BHT, but turning the

operation count into a cost number to compare to other attacks requires

defining a cost metric. Novices might think that the choice of cost

metric is a matter of "low-order details", but the gap between 2^146 and

2^100 is bigger than most of the attack advances we've seen in NISTPQC,

including advances that NIST has used to publicly justify its decisions.

It's almost as big as the gap between Kyber-768 and what the New Hope

paper calls "JarJar". It's bigger than the pre-quantum gap between

RSA-2048 and RSA-1024.

Once upon a time, NIST cited https://cr.yp.to/papers.html#collisioncost

as justification for ignoring BHT. But that paper doesn't use a "gates"

metric, so this doesn't salvage the idea that NIST has defined a "gates"

metric. That paper also takes RAM costs into account in exponents, which

is the opposite direction from treating RAM as free.

Ray Perlner claimed in August 2020 that quantum RAM gates are "much less

realistic" than non-quantum RAM gates. This sounds very wrong to me---

cost 1 for an unlimited-size array lookup is not reality, and will never

be reality, whether quantum or not---but in any case it doesn't resolve

the more basic problem of _not having a definition_.

Where's the definition that says exactly which gates are included in

NIST's "gates"? If the situation is supposed to be that NIST's "quantum

gates" exclude RAM gates while, bizarrely, NIST's "classical gates"

_include_ RAM gates, then we should all be able to confidently see this

by looking at NIST's definition. But there is no definition!

The fact that NIST has never defined a cost metric to be used in NISTPQC

is a disincentive for detailed cost evaluations. Consider the quandary

facing Caroline the Cryptanalyst:

* If Caroline does the work to analyze system S in a realistic cost

metric (e.g., 1981 Brent--Kung with appropriate parameter choices,

or a quantum variant of the same thing) then people who don't like

S will complain that Caroline is cheating and overestimating

security through the choice of metric.

* If Caroline instead picks a simpler cost metric that ignores the

costs of RAM etc. then maybe Caroline will have less work, but

people who like S will (correctly) complain that Caroline is

underestimating security.

Caroline would be able to justify an analysis by pointing to NIST's

request for a particular metric---but NIST hasn't defined a metric to

use. Is it a surprise, then, that detailed cost evaluations in NISTPQC

have been so rare? Is it a surprise that, to the extent they exist, they

don't agree on which cost metrics to use?

Simply counting the higher-level operations that naturally appear in

attacks doesn't run into these problems, but how are we supposed to

compare attacks against different systems? Or attacks against the same

system, when the underlying operations are different? (For example,

within published lattice attacks, quantum enumeration looks more

efficient than sieving in _some_ cost metrics, although in this case

there are also many unresolved questions about the higher-level

operation counts.)

Even worse, NIST has managed to make many casual observers (not the

careful observers such as Caroline!) believe that NIST _has_ defined

metrics to be used in NISTPQC. The unfortunate reality is that there's a

definitional void filled with unjustified and conflicting assumptions.

For example:

* Some people think that NIST "gates" assign low costs to memory (so

BHT uses 2^100 NIST "gates" and the call for proposals is wrong).

* Some people think that NIST "gates" assign high costs to memory

(which is why BHT doesn't beat 2^146 NIST "gates").

These incompatible beliefs end up being applied inconsistently to

different candidates, as we're seeing in discussions right now about

multiple finalists. The same lack of definitions means that nobody can

figure out the "category" boundaries, even for the small number of

submissions where operation counts in attacks are thoroughly understood.

This is a cross-cutting failure in the NISTPQC evaluation procedures.

NIST has the unique power to fix this mess by issuing a _complete_

definition of a metric (or multiple metrics if that's really needed) for

_all_ NISTPQC submissions to use. This means not just waving around

ambiguous words such as "gates", but clearly specifying _which_ gates.

We can use, say, vOW and BHT cost analyses as test cases for the clarity

of the definition, and then we can all go ahead with evaluating all

submissions in the same metric.

Will this be the "best" metric? Unlikely. Maybe it will be so poorly

chosen as to noticeably damage parameter selection. But we're starting

from major ongoing damage to the evaluation process as a direct result

of NIST's "category" boundaries not having clear definitions.

> > We agree that the second column is sometimes less than the fourth column.

> > However, the second column ignores the huge overheads measured by the

> > third column.

> No. The second column includes a logarithmic memory overhead.

Counting cost 1 for each bit of an index i is simply reading i; it is

not covering the far larger costs of retrieving x[i]. The attack is, as

the submission states, "bottlenecked by random access to a huge array";

it is not bottlenecked by inspecting the indices used for this access.

The picture to have in mind here is a RAM request moving a huge distance

down wires (or fiber or any other physical communication technology),

compared to inspecting bits locally.

In algorithm analysis, pointing to overhead beyond cost analysis X means

pointing to a cost that _isn't_ included in X. A reader who redefines

"overhead" to cover something that _is_ included in X is not correctly

presenting the original statement. This redefinition is also missing the

point in this case: what we care about are "the huge overheads" of

continual RAM access, since those are the bottlenecks in the attack.

> > As stated in the submission:

> > A closer look shows that the attack in [11] is bottlenecked by random

> > access to a huge array (much larger than the public key being

> > attacked), and that subsequent ISD variants use even more memory. The

> > same amount of hardware allows much more parallelism in attacking,

> > e.g., AES-256.

> This is the second time you've used the word "huge". Let's give it some

> perspective. 2^35.66 bits is less RAM than the laptop I'm using to write

> this.

It's also 10000 times larger than the public key. This is "a huge array

(much larger than the public key being attacked)", as the submission

says. The same circuit size "allows much more parallelism in attacking,

e.g., AES-256", as the submission says. See

https://www.sites.google.com/site/michaeljameswiener/dessearch.pdf

for an example (predating AES) of the efficiency of special-purpose

hardware for cipher attacks.

The laptop "perspective" wildly overestimates the costs of attacks, as

already established in the literature on attack hardware. The amount of

overestimation varies from one cryptosystem to another. This makes the

laptop "perspective" useless for serious security evaluation---so why

are we seeing this "perspective" appearing in NISTPQC discussions?

Answer: it's yet another random attempt to fill the void left by NIST's

failure to _define_ the NISTPQC security requirements. All sorts of

people interested in NISTPQC are being put in the position of figuring

out for themselves what the NISTPQC security levels mean, because the

only organization in a position to do the job centrally hasn't done it.

> > The submission goes on to state expectations regarding (e.g.) 6960119

> > being more expensive to break than AES-256. The numbers you quote (e.g.,

> > 2^271.18 operations on 2^47.58 bits of RAM) are consistent with these

> > expectations.

> Your submission and this reply focus almost exclusively on the

> mceliece-6960-119 parameter set. What about the other four?

The submission takes this example (using the words "as an example") to

explain the most important effects separating cost metrics in this case:

namely, the state-of-the-art attacks are "bottlenecked by random access

to a huge array (much larger than the public key being attacked)". Spelling out

the same points for each parameter set, mutatis mutandis, would be

unilluminating.

Because of the unlimited variation in cost metrics, any parameter set

can be ranked anywhere from "overkill" to "too small" depending on which

cost metric is selected, as the submission spells out for this example.

The incorrect idea that this is specific to Classic McEliece was already

answered: "Note that one can make any cryptosystem sound much easier to

break." What would be useful instead is maintaining cost

evaluations for attacks across all parameter sets in all submissions,

with AES-128, SHA-256, etc. as baselines---including how the evaluations

are changing over time, so that we can recognize attack improvements,

evaluate security stability, and evaluate maturity of analysis. However,

this process doesn't work until NIST defines a cost metric for all

NISTPQC submissions to use. Allowing cost metrics to be selected ad-hoc

for particular submissions is error-prone and subject to abuse.

> > The literature shows that optimized hardware for AES attacks has similar

> > efficiency to multipliers. The multipliers in an Intel CPU core carry

> > out millions of bit operations in the same time that it takes the core

> > to retrieve data from (say) 6GB of RAM, never mind RAM contention across

> > cores. GPUs and TPUs show the costs of RAM even more clearly. On a

> > larger scale, decades of supercomputing literature consistently fit a

> > square-root scaling of RAM costs, and science-fiction 3D computers

> > (which seem much harder to build than quantum computers) would still

> > have cube-root costs.

> No. It's not as simple as applying a square-root overhead for memory across

> the board. For an interesting (but not very rigorous) example see

> http://www.ilikebigbits.com/2014_04_21_myth_of_ram_1.html. The blog argues

> for a square-root memory cost. If you look at the graphs this is a poor

> fit in the 10 MiB to 6 GiB range.

Across seven orders of magnitude of array sizes, the graphs in that blog

post are always within one order of magnitude of the square-root

prediction (and people who understand the microarchitecture already know

where the deviations come from). If you look at the supercomputing

literature you'll see the same basic scaling across further orders of

magnitude. (See, e.g., https://cr.yp.to/papers.html#nonuniform, Appendix

Q.6, comparing the records from https://sortbenchmark.org at two

different sizes for the number of 100-byte items sorted per joule.)

There's also a clear physical mechanism producing the square-root

overhead: namely, moving a bit across distance sqrt(N) consists of

sqrt(N) motions through distance 1. Each motion through distance 1

occupies a constant amount of hardware for a constant amount of time,

the same way that a bit operation occupies a constant amount of hardware

for a constant amount of time. Overall the motion through distance

sqrt(N) is occupying Theta(sqrt(N)) times more hardware (per unit time)

than a bit operation is, where Theta covers the ratios between the

relevant constants.
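
The mechanism is simple enough to tabulate. Assuming a 2D layout in
which N bits of memory occupy a sqrt(N)-by-sqrt(N) area, each random
access ties up hardware proportional to the ~sqrt(N) unit-distance hops
the request travels. This is a toy restatement of the argument above,
not a calibrated cost model:

```python
def log2_access_cost(log2_mem_bits):
    # One random access into 2^m bits laid out in 2D travels ~2^(m/2)
    # unit distances, each occupying constant hardware for constant time.
    return log2_mem_bits / 2.0

# 2^35.66 and 2^47.58 bits are the array sizes mentioned in this thread;
# 2^80 is an arbitrary larger example.
for m in (35.66, 47.58, 80.0):
    print(m, round(log2_access_cost(m), 2))
```

So even the "laptop-sized" 2^35.66-bit array carries a ~2^17.8 per-access
overhead factor in this model, relative to a local bit operation.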

This basic throughput barrier is exactly what one sees in the 1981

Brent--Kung theorem, which obtains the same square root for a broad

class of plausible machine models:

https://maths-people.anu.edu.au/~brent/pd/rpb055.pdf

Fiber doesn't magically avoid this throughput barrier: the hardware is

still filled up by the data being communicated. Free-space lasers might

seem to avoid the cost of equipment in the middle, but they spread out

linearly with distance and don't end up improving scalability.

The "holographic principle" in physics says that the information in a

volume of space is actually two-dimensional. (This won't surprise

physics students who have already practiced converting three-dimensional

to two-dimensional integrals.) The numerical limits don't say that

existing technology is close to optimal, but they do say that anyone

claiming better than square-root scaling is wrong for large enough sizes

and thus has a ton of explaining to do regarding cryptographic sizes.

An easy mistake to make here is to cherry-pick some news story about a

technology improvement at some medium size, and compare it to unimproved

technology at a small size (e.g., the laptop "perspective"), while

failing to ask whether technology improvements are also following the

expected square-root curve down to small sizes.

> Not all memory accesses are the same. The memory overhead depends on the

> number of random accesses to high-latency memory.

I agree that memory accesses differ in general. However, the attacks

we're talking about here are bottlenecked by random access to a huge

array ("much larger than the public key being attacked").

Note that one can replace parallel random access with, e.g., bitonic

sorting, at which point the word "random" might seem misleading since

at a low level all accesses are through a predefined network. What

really matters, as the Brent--Kung proof clearly shows, is simply how

much data is moving across long distances.
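
For readers who haven't seen it, here is a minimal bitonic sorter in
Python (the textbook algorithm, not attack code). The point relevant to
the discussion: every compare-exchange touches positions (i, i+k) fixed
in advance by the network, so nothing about the access pattern is
data-dependent, yet the total long-distance data movement is exactly
what Brent--Kung-style bounds charge for.

```python
def comp_swap(a, i, j, ascending):
    # Compare-exchange at fixed positions (i, j): data-independent pattern.
    if (a[i] > a[j]) == ascending:
        a[i], a[j] = a[j], a[i]

def bitonic_merge(a, lo, n, ascending):
    if n > 1:
        k = n // 2
        for i in range(lo, lo + k):
            comp_swap(a, i, i + k, ascending)
        bitonic_merge(a, lo, k, ascending)
        bitonic_merge(a, lo + k, k, ascending)

def bitonic_sort(a, lo=0, n=None, ascending=True):
    # Sorts a[lo:lo+n] in place; n must be a power of two.
    if n is None:
        n = len(a)
    if n > 1:
        k = n // 2
        bitonic_sort(a, lo, k, True)
        bitonic_sort(a, lo + k, k, False)
        bitonic_merge(a, lo, n, ascending)

data = [3, 7, 4, 8, 6, 2, 1, 5]
bitonic_sort(data)
print(data)  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```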

> One cost model in Ray Perlner's slides assumes that up to 2^50 bits of

> memory is local with unit cost memory accesses.

This was already answered: "The multipliers in an Intel CPU core carry

out millions of bit operations in the same time that it takes the core

to retrieve data from (say) 6GB of RAM, never mind RAM contention across

cores." Floating such assumptions in slides, without a committed

definition from NIST, does not allow

anyone to be sure what's included in "category 3".

This isn't specific to Classic McEliece. Every submission is in the same

position of uncertainty regarding what NIST's security requirements

mean. Beyond actual security risks, this creates an artificial risk of

being criticized for allegedly not meeting someone's interpretation of

NIST's security requirements. There is no way to avoid this risk: _any_

"category" assignment of _any_ parameter set in _any_ submission can be

targeted by a metric-selection attack. The way that these criticisms are

allocated to submissions has nothing to do with security; it is a

political game of ad-hoc marketing of cost metrics.

Regarding citations, the Classic McEliece submission cites the original

attack papers. Those papers, in turn, convincingly count the operations

that they use. Each operation count corresponds to many different cost

numbers for many different choices of cost metric:

        attack analysis                     cost metric 1
attack ----------------> operation count --------------> cost 1
                                        \   cost metric 2
                                         \-------------> cost 2
                                         \  cost metric 3
                                          \------------> cost 3
                                          \ cost metric 4
                                           \-----------> cost 4
                                           \  cost metric 5
                                            \----------> cost 5
                                            \ cost metric 6
                                             \---------> cost 6
etc.

This correspondence is conceptually straightforward, but _which cost

metric are submitters supposed to use for NISTPQC_? Nobody knows.

There are other submissions where the operations in attacks are much

harder to count, but if we optimistically imagine this problem being

fixed then we're still faced with the same basic cost-metric question

for all submissions, as noted above.

Submitters could start from Perlner's recent "preliminary" slides, which

run through various possible features of metrics. How many metrics are

clearly defined in those slides, with a NIST commitment to treat attacks

in those metrics as being relevant? Zero. So what exactly are submitters

supposed to do? For each feature listed by Perlner, try to guess the

sanest-sounding definition of a specific metric that seems consistent

with that feature, and then work through a pile of cost evaluations in

all of these metrics, while being pretty sure that NIST will disregard

most, perhaps all, of the resulting numbers? See how crazy this is?

> You are arguing that because NIST has not specified a single metric

> no evaluation in any metric can be valid. The Call for Proposals actually

> says (NIST's emphasis):

> "In order for a cryptosystem to satisfy one of the above security

> requirements, any attack must require computational resources

> comparable to or greater than the stated threshold, with respect

> to *all* metrics that NIST deems to be potentially relevant to

> practical security."

> NIST gives circuit size as an example of a metric

No.

As a baseline, I think everyone agrees that the structure specified in

the call for proposals is as follows:

* There's a set M of metrics to be used for NISTPQC (the "metrics

that NIST deems to be potentially relevant to practical security").

* Cryptosystem X is eliminated from category 1 (and thus from

NISTPQC) if there's even _one_ metric in M saying that attacking X

costs less than AES-128 key search.

* Cryptosystem X is eliminated from category 2 if one metric in M

says that attacking X costs less than SHA-256 collision search.

* Similarly for other categories.

But what's M? Can we even point to _one_ clear example of something in

M? Unfortunately not. If you look for a NIST document that clearly

defines a metric and specifies that the metric is in this set M, you

won't find any such document!

For example, it's _not_ true that there's some clearly defined "circuit

size" metric specified to be in M, and it's not true that there's some

clearly defined "gates" metric specified to be in M. A clear definition

of NIST "gates" would let the community understand and verify the claim

in the call for proposals that known SHA3-256 attacks, such as BHT,

don't beat 2^146 NIST "gates". Right now this is unverifiable,

unfalsifiable, mathematically meaningless, since there's no definition.

> and says that AES-192 key

> recovery is equivalent to 2^207 classical gates in this metric

Unfortunately NIST never defined "this metric". See above.

> That's it. No mention of the other parameter sets. No attempt to quantify

> the impact of memory accesses.

The literature already pinpoints the number of memory accesses for

arbitrary input sizes. As soon as NIST defines a cost metric, an

analysis in that metric will be straightforward.

> Just a vague statement that something that is "considerably below

> 2^256" bit operations will be fine because memory.

The actual "will be fine" statement is "more expensive to break than

AES-256". This is not vague. It has the cost metric as a parameter, and

of course one _can_ define cost metrics that reverse the comparison by

ignoring important components of costs, but, again, having to allow

every possible metric would undermine _every_ category assignment in

_every_ submission.

Quantifying "considerably below 2^256" depends on the cost metric. The

minimum "time" in the literature is achieved by algorithms using absurd

amounts of memory, and those algorithms are optimal in metrics that allow

x[i] with cost 1, but gradually increasing the RAM cost in the metrics

(compared to other costs) shifts the optimum to different algorithms.
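
A toy illustration of that last point, with entirely invented numbers:
take two hypothetical attack algorithms, one fast but memory-hungry and
one slower but lean, and price memory at mem^alpha per access. As alpha
grows from 0, the cheaper algorithm changes.

```python
# Two hypothetical attack algorithms (log2 operations, log2 memory bits);
# the numbers are invented purely to illustrate the crossover.
algos = {"big-memory": (140.0, 80.0), "low-memory": (158.0, 20.0)}

def log2_cost(ops, mem, alpha):
    # Toy pricing: each operation also pays mem^alpha for its access.
    return ops + alpha * mem

for alpha in (0.0, 1 / 3, 0.5):
    best = min(algos, key=lambda name: log2_cost(*algos[name], alpha))
    print(round(alpha, 2), best)
```

With alpha = 0 (free memory) the big-memory algorithm wins at 2^140;
with cube-root or square-root pricing the low-memory algorithm wins.
The same crossover is why "considerably below 2^256" has no single
quantification without a fixed metric.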

> This is the irrelevant hype I mean. You are ignoring asymptotic improvements

> to half-distance decoding because McEliece uses errors with sublinear weight

Carefully quantifying the application of attack ideas to a cryptosystem,

and correctly concluding that the asymptotic change in security levels

is exactly 0%, is not "ignoring" anything.

The 0% change has always been clearly and correctly labeled as being

asymptotic. This means that there's <10% change once the security level

is large enough, <1% change once the security level is large enough,

etc. Regarding the question of what "large enough" means, the submission

cites the relevant attack papers and emphasizes the importance of

concrete cost analysis.

Seeing that lattices have a worse picture here---e.g., the asymptotic

lattice security levels were 42% higher just ten years ago---is a risk

indicator. Nobody says it's the only risk indicator! Classic McEliece

has established a strong track record of observing many other risk

indicators and acting accordingly.

This is important. History suggests that identifying and monitoring risk

indicators is one of the most effective ways that system designers can

avoid disasters. There's ample evidence for this in the general

risk-management literature, and there's quite a bit of evidence in

cryptography.

Consider, for example, pre-quantum discrete-logarithm-based

cryptography. There was already a long history by 2005 of

* multiplicative groups suffering percentage losses (and sometimes

larger losses) of asymptotic security levels, including losses

avoided by elliptic curves, and

* fields with extra automorphisms suffering percentage losses of

asymptotic security levels, including losses avoided by prime

fields.

Public recommendations from cryptographers monitoring the literature

concluded that multiplicative groups were higher risk than elliptic

curves, and that fields with extra automorphisms were higher risk than

prime fields. (Of course, people desperate to paint the opposite picture

could always come up with reasons: the RSA company was still claiming

that elliptic curves hadn't been studied, for example, and there was

also an interesting story regarding "medium" fields.) Unsurprisingly, we

saw big further security losses after that for multiplicative groups

(for some highlights see https://blog.cr.yp.to/20161030-pqnist.html),

especially in the case of fields with extra automorphisms.

What does this tell us about lattices vs. McEliece? Should we be on the

alert for percentage losses of asymptotic security levels (many losses

for lattices, nothing for McEliece)? Larger losses (yes for lattices, no

for McEliece)? Unnecessary automorphisms? All of the above?

If there's a rational argument that a particular risk indicator _isn't_

helpful, the argument should be published for further analysis. Posting

one message after another labeling a risk indicator as "irrelevant hype"

doesn't move the analysis forward, and seems hard to reconcile with the

thoroughly documented discrete-log history.

> yet compare it with asymptotic improvements to the exact short vector problem

> even though lattice schemes depend on different variants of the problem.

Actually, the SVP security levels being asymptotically 42% higher in

2010 than today translates directly into the security levels of

(otherwise unbroken) lattice systems being asymptotically 42% higher in

2010 than today. Basically, the lattice attacks are bottlenecked by BKZ,

and BKZ is bottlenecked by a relatively short series of expensive

shortest-vector searches in a lower dimension (beta, the "block size"),

so if the SVP solution loses P% of its asymptotic bits of security then

the whole attack loses P% of its asymptotic bits of security.

(BKZ is one of many algorithms, and ongoing research indicates that SVP

isn't really the right interface, but these issues don't seem to affect

the exponent. A bigger issue is hybrid attacks: my message on that topic

in July outlines a way to remove another percentage from the security

level for any system with errors in, say, {-3,...,3}, so the loss for

lattice security compared to 2010 will be even larger than the SVP loss.

Some hybrid-attack details still need verification, and the percentage

hasn't been calculated yet.)

For comparison, consider https://eprint.iacr.org/2017/1139.pdf reporting

exponent 0.0465 for half-distance decoding, where Prange's algorithm

from 51 years earlier was exponent 0.0576; see the paper for a survey of

intermediate results. This asymptotic 24% change corresponds to an

asymptotic 0% change in the asymptotic security level of McEliece's

system. The situation for McEliece's system has always been that the

asymptotic bits of security come primarily from a gigantic combinatorial

search through information sets.
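
The 24% figure is just arithmetic on the two exponents as quoted above:

```python
prange = 0.0576    # Prange's half-distance decoding exponent (1962)
improved = 0.0465  # exponent reported in eprint 2017/1139
print(round(100 * (prange - improved) / improved))  # -> 24 (percent)
```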

To summarize: The graph in https://video.cr.yp.to/2020/0813/video.html

correctly compares asymptotic attack costs, as the label indicates. In

particular, the asymptotic security level of otherwise unbroken lattice

cryptosystems has dropped as dramatically as shown in the graph. (Also,

recent work, once quantified, should produce further lattice losses.)

> > The section goes on to summarize what the literature says about concrete

> > cost analyses, and ends with the AES-256 comparison reviewed above.

> The rest of section 8.2 is quoted above in its entirety. The literature

> says much more about concrete cost analyses than this "summary".

"Summary, noun: a brief statement or account of the main points of

something." I agree that the literature says much more than the summary

does.

> > Evaluating such a claim would require a clear, stable definition of

> > "Category 3". If NIST settles on a definition to be used in NISTPQC then

> > submitters will be able to make "category" assignments accordingly.

> > See above regarding the actual costs of accessing 6GB of RAM.

> No. The definition of category 3 is stable.

No, it isn't. NIST has never issued a clear, stable definition of

"Category 3". NIST has given a pseudo-definition---something that looks

at first glance like a definition but that turns out to rely on

undefined metrics. How are submitters supposed to evaluate costs in cost

metrics that haven't been defined? See above.

> 2^189.2 classical gates does not mean 2^189.2 random memory

> accesses. Not all gates involve a memory access. Not all memory accesses

> are to high-latency memory. Not all high-latency memory accesses are

> random.

It's of course true across the field of cryptanalysis that attacks vary

in how often they're randomly accessing memory. However, all of the

attacks we're discussing here are bottlenecked by random access.

> My point is that you can't just say that everything will be fine

> because memory. You actually need to do the analysis.

The attack paper [11] cited in exactly this section of the submission

already includes a detailed cost analysis of Stern's attack with many

further optimizations. The inner loop consists of picking up a number

that looks random, xor'ing it into an index i, randomly accessing x[i],

and going back to the top of the loop.
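
As a toy illustration of that inner-loop shape (a sketch of the access
pattern only, with made-up sizes, not the actual attack in [11]):

```python
import random

random.seed(1)
M = 1 << 20  # stand-in for the "huge array" (entries, far smaller than real)
x = [random.getrandbits(16) for _ in range(M)]

acc = 0
i = 0
for _ in range(10000):
    r = random.getrandbits(20)  # "a number that looks random"
    i = (i ^ r) % M             # xor it into an index i
    acc ^= x[i]                 # randomly access x[i] -- the bottleneck
# At cryptanalytic sizes this loop is dominated by the x[i] accesses,
# which is what "bottlenecked by random access" means above.
```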

As the submission says, this attack is "bottlenecked by random access".

Newer attacks are similarly memory-bound, using even more memory.

The operation counts are clear (for this submission; again, there are

other submissions where attacks are much harder to analyze). The notion

that there's some missing analysis of how many RAM accesses there are is

simply not true. What's missing is a definition from NIST of how NISTPQC

assigns costs to x[i] and other operations.

> > There's also a tremendous amount to say about the dangers of unnecessary

> > complexity. One aspect of this is specifying more parameter sets than

> > necessary. Of course, NIST's request for lower-security parameter sets

> > was an offer that Classic McEliece couldn't refuse, but we continue to

> > recommend the 2^256 parameter sets.

> I assume that if Classic McEliece is chosen for standardization you will

> be withdrawing the parameter sets that you don't recommend for use.

Procedurally, NIST is in charge, and has always specifically reserved

the right to make its own parameter-set decisions after it consults with

the submitters and the community. See the call for proposals. Obviously

submission teams and other interested parties will provide feedback to

NIST regarding the wisdom of their decisions.

> > One can easily write down even larger parameter sets. Also, the ISD

> > attack analyses in the literature are precise and straightforward, and

> > it should be easy to evaluate the security of any desired parameters in

> > any well-defined metric. However, the right order of events is, first,

> > for NIST to fully define the cost metric to be used for "categories",

> > and, second, for all submission teams to evaluate costs in this metric.

> No. The right order of events is, first, submit parameter sets with security

> estimates in a metric that you think is practically relevant and, second,

> let everyone else spend three rounds of the NIST process evaluating those

> claims. That's what most of other submissions have done.

No, it isn't. Let's take the Kyber submission as a case study of what

other submissions have in fact done.

The round-3 Kyber submission has an analysis concluding that attacks

against round-3 Kyber-512 are probably---accounting for the "known

unknowns"---somewhere between 2^135 and 2^167 "gates", a remarkably wide

range for a cryptosystem that claims its security is well understood.

The submission hopes that the "known unknowns" will be pinned down, and

that the final answer will be above 2^143. The submission also claims to

use a "conservative lower bound" on attack costs. Can NIST see whether

this "bound" is above or below 2^143? Can anyone else see this? Nope.

Could someone have been reviewing the analysis since the first round of

the NIST process? No. The parameter set has changed. The analysis has

changed. The problem that cryptanalysts are being asked to study has

changed! Kyber-512 used to claim that its security was based on the

MLWE problem, but, as far as I can tell, this has been dropped in round

3. Concretely, my understanding is that if someone claims an attack

against round-3 Kyber-512 by showing that MLWE for that size is easier

to break than AES-128, the official response will be that the round-3

submission does _not_ claim MLWE attacks are so hard---its security

levels are claimed only for direct IND-CPA/IND-CCA2 attacks. (I've

asked for confirmation of this in a separate thread on Kyber.)

So much for spending "three rounds of the NIST process evaluating those

claims", or, in the words of the call for proposals, "maturity of

analysis". Now let's move on to the "metric" question.

Let's say Caroline the Cryptanalyst is looking at round-3 Kyber-512.

What would be required, structurally, for Caroline to show that round-3

Kyber-512 is broken? Let's assume for simplicity that Kyber "gates" are

clearly defined and that AES-128 key search needs 2^143 Kyber "gates".

Here are two possibilities for Caroline:

* Strategy 1: Show that the best attacks considered in the submission

actually use fewer than 2^143 "gates". This is compatible with the

2^135...2^167 range claimed in the submission.

* Strategy 2: Show that attacks not considered in the submission,

such as hybrid attacks, use fewer than 2^143 "gates".

These would both be breaks, since they'd violate the minimum for

category 1, right? Nope. The submission protects itself against this by

leaving wiggle room in the cost metric. Here's the critical quote:

We do not think that [2^135] would be catastrophic, in particular

given the massive memory requirements ...

The submission is saying that 2^135 "gates" wouldn't actually be a

problem (assuming the attacks use tons of memory, which most, although

not all, lattice attacks do), since of course memory costs much more

than a "gate" count indicates.

Should Caroline expect NIST to downplay the memory requirements and

accept the attack as a break? Why?

NIST has repeatedly asked for "confidence" that the NISTPQC security

requirements are reached. I challenged the security of round-2

Kyber-512, pointing out errors in the security analysis and pointing out

reasons to believe that known attacks don't use more "gates" than

AES-128. Did NIST claim "confidence" that the attacks actually use more

"gates" than AES-128? No. Did NIST kick out that parameter set? No.

Instead NIST initiated a "preliminary", open-ended project to start

incorporating overheads into cost metrics.

Does this sound like NIST automatically kicking out any parameter set

when there's an attack below 2^143 "gates", with free RAM "gates"? No.

It sounds like NIST shifting the boundaries of "category 1" as far as

necessary to declare that Kyber-512 isn't broken, by playing games with

the cost metric. Maybe NIST has another reason for not specifying a

metric, but in any case the moving target deters cryptanalysis.

Part of recognizing security risks is recognizing the procedures that

are actually being used for designing, evaluating, and standardizing

cryptography. Cryptanalytic work is a critical part of this, and NISTPQC

is raising far more security questions than cryptanalysts can answer in

the time available. It's not as if the attack picture was stable in

2020, so why should we expect it to suddenly be stable in 2021, or in

2025? This denial-of-service attack against cryptanalysts starts from

the inherent cryptanalytic complexity of post-quantum cryptography, but

the attack is exacerbated by NIST's continued failure to clarify its

pseudo-definition of the minimum allowed NISTPQC security level.

---Dan

The questions specific to Classic McEliece have already been answered. There are some

broader NISTPQC process points here for NIST to address (as shown by the

inconsistent handling of different cryptosystems), and there are some

factual errors that I'll correct for the record.

As a starting point, there appears to be a misconception that NIST has

already defined metrics to be used for the NISTPQC "categories": namely,

(non-quantum and quantum) "gates" metrics, ignoring the costs of RAM.

If this were true then I would agree that all submissions have to report

security in these metrics. But it isn't true. NIST has never issued any

such definition. There are many different definitions of (non-quantum

and quantum) "gates" in the literature, with many variations (e.g.,

sometimes RAM gates, sometimes not), often producing radically different

cost numbers; NIST has never said which "gates" we're supposed to use.

The hard way to check this is to dig through the complete NISTPQC

archives and look at what NIST has written. The easy way, for people who

know some quantum algorithms, is to consider the following two facts:

* The NISTPQC call for proposals claims that known SHA3-256 collision

attacks need 2^146 "gates", with no quantum speedups.

* It's easy to find SHA3-256 collisions in only about 2^85 oracle

queries, which is in the ballpark of 2^100 quantum "gates" with the

"gates" defined in, e.g., Ambainis's famous distinctness paper and

used in many other quantum-walk papers.

To see the 2^85, apply Ambainis's algorithm to 128-bit strings

after a random prefix. Or, simpler and presumably faster, apply the

BHT (Brassard--Hoyer--Tapp) algorithm to 256-bit strings. This

means a Grover search through 170-bit inputs for hashes appearing

in a table of 2^86 hashes of other inputs.

To see the 2^100, start from the 2^85 and look at the cost of

SHA3-256 evaluation. Maybe this ends up slightly over 100, but maybe

not: one can handle a large oracle cost by adjusting 170 down and

86 up. The exact number doesn't matter for anything I'll say here.
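For readers who want to check the arithmetic, the exponents above can be verified directly (a sketch; SHA3-256 is modeled as a random function and constant factors are suppressed):

```python
from math import log2, sqrt

n = 256            # SHA3-256 output bits
table = 2**86      # hashes precomputed and stored in the table
inputs = 2**170    # Grover search space of 170-bit inputs

# Expected number of "marked" inputs, i.e. inputs whose hash appears
# in the table (modeling SHA3-256 as a random function):
marked = inputs * table / 2**n             # 2^(170+86-256) = 1

# Grover search needs roughly sqrt(search space / marked) iterations:
grover_iterations = sqrt(inputs / marked)  # 2^85

# Total oracle queries: table setup plus Grover iterations, about 2^86;
# multiplying by the cost of one SHA3-256 evaluation gives the
# ballpark-2^100 "gates" figure discussed above.
total_queries = table + grover_iterations
print(round(log2(grover_iterations)), round(log2(total_queries), 2))  # 85 86.58
```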

Given the gap between 2^100 and 2^146, why does the call for proposals

ignore BHT, Ambainis, etc.? Anyone who thinks that NIST has defined

"gates", and that NIST's "gates" allow free memory, will have trouble

explaining this.

It's clear how many _operations_ are used by (e.g.) BHT, but turning the

operation count into a cost number to compare to other attacks requires

defining a cost metric. Novices might think that the choice of cost

metric is a matter of "low-order details", but the gap between 2^146 and

2^100 is bigger than most of the attack advances we've seen in NISTPQC,

including advances that NIST has used to publicly justify its decisions.

It's almost as big as the gap between Kyber-768 and what the New Hope

paper calls "JarJar". It's bigger than the pre-quantum gap between

RSA-2048 and RSA-1024.

Once upon a time, NIST cited https://cr.yp.to/papers.html#collisioncost

as justification for ignoring BHT. But that paper doesn't use a "gates"

metric, so this doesn't salvage the idea that NIST has defined a "gates"

metric. That paper also takes RAM costs into account in exponents, which

is the opposite direction from treating RAM as free.

Ray Perlner claimed in August 2020 that quantum RAM gates are "much less

realistic" than non-quantum RAM gates. This sounds very wrong to me---

cost 1 for an unlimited-size array lookup is not reality, and will never

be reality, whether quantum or not---but in any case it doesn't resolve

the more basic problem of _not having a definition_.

Where's the definition that says exactly which gates are included in

NIST's "gates"? If the situation is supposed to be that NIST's "quantum

gates" exclude RAM gates while, bizarrely, NIST's "classical gates"

_include_ RAM gates, then we should all be able to confidently see this

by looking at NIST's definition. But there is no definition!

The fact that NIST has never defined a cost metric to be used in NISTPQC

is a disincentive for detailed cost evaluations. Consider the quandary

facing Caroline the Cryptanalyst:

* If Caroline does the work to analyze system S in a realistic cost

metric (e.g., 1981 Brent--Kung with appropriate parameter choices,

or a quantum variant of the same thing) then people who don't like

S will complain that Caroline is cheating and overestimating

security through the choice of metric.

* If Caroline instead picks a simpler cost metric that ignores the

costs of RAM etc. then maybe Caroline will have less work, but

people who like S will (correctly) complain that Caroline is

underestimating security.

Caroline would be able to justify an analysis by pointing to NIST's

request for a particular metric---but NIST hasn't defined a metric to

use. Is it a surprise, then, that detailed cost evaluations in NISTPQC

have been so rare? Is it a surprise that, to the extent they exist, they

don't agree on which cost metrics to use?

Simply counting the higher-level operations that naturally appear in

attacks doesn't run into these problems, but how are we supposed to

compare attacks against different systems? Or attacks against the same

system, when the underlying operations are different? (For example,

within published lattice attacks, quantum enumeration looks more

efficient than sieving in _some_ cost metrics, although in this case

there are also many unresolved questions about the higher-level

operation counts.)

Even worse, NIST has managed to make many casual observers (not the

careful observers such as Caroline!) believe that NIST _has_ defined

metrics to be used in NISTPQC. The unfortunate reality is that there's a

definitional void filled with unjustified and conflicting assumptions.

For example:

* Some people think that NIST "gates" assign low costs to memory (so

BHT uses 2^100 NIST "gates" and the call for proposals is wrong).

* Some people think that NIST "gates" assign high costs to memory

(which is why BHT doesn't beat 2^146 NIST "gates").

These incompatible beliefs end up being applied inconsistently to

different candidates, as we're seeing in discussions right now about

multiple finalists. The same lack of definitions means that nobody can

figure out the "category" boundaries, even for the small number of

submissions where operation counts in attacks are thoroughly understood.

This is a cross-cutting failure in the NISTPQC evaluation procedures.

NIST has the unique power to fix this mess by issuing a _complete_

definition of a metric (or multiple metrics if that's really needed) for

_all_ NISTPQC submissions to use. This means not just waving around

ambiguous words such as "gates", but clearly specifying _which_ gates.

We can use, say, vOW and BHT cost analyses as test cases for the clarity

of the definition, and then we can all go ahead with evaluating all

submissions in the same metric.

Will this be the "best" metric? Unlikely. Maybe it will be so poorly

chosen as to noticeably damage parameter selection. But we're starting

from major ongoing damage to the evaluation process as a direct result

of NIST's "category" boundaries not having clear definitions.

> > We agree that the second column is sometimes less than the fourth column.

> > However, the second column ignores the huge overheads measured by the

> > third column.

> No. The second column includes a logarithmic memory overhead.

That logarithmic overhead accounts only for inspecting the bits of the index, not for the far larger costs of retrieving x[i]. The attack is, as

the submission states, "bottlenecked by random access to a huge array";

it is not bottlenecked by inspecting the indices used for this access.

The picture to have in mind here is a RAM request moving a huge distance

down wires (or fiber or any other physical communication technology),

compared to inspecting bits locally.

In algorithm analysis, pointing to overhead beyond cost analysis X means

pointing to a cost that _isn't_ included in X. A reader who redefines

"overhead" to cover something that _is_ included in X is not correctly

presenting the original statement. This redefinition is also missing the

point in this case: what we care about are "the huge overheads" of

continual RAM access, since those are the bottlenecks in the attack.

> > As stated in the submission:

> > A closer look shows that the attack in [11] is bottlenecked by random

> > access to a huge array (much larger than the public key being

> > attacked), and that subsequent ISD variants use even more memory. The

> > same amount of hardware allows much more parallelism in attacking,

> > e.g., AES-256.

> This is the second time you've used the word "huge". Let's give it some

> perspective. 2^35.66 bits is less RAM than the laptop I'm using to write

> this.

The attack is bottlenecked by random access to "a huge array (much larger than the public key being attacked)", as the submission

says. The same circuit size "allows much more parallelism in attacking,

e.g., AES-256", as the submission says. See

https://www.sites.google.com/site/michaeljameswiener/dessearch.pdf

for an example (predating AES) of the efficiency of special-purpose

hardware for cipher attacks.

The laptop "perspective" wildly overestimates the costs of attacks, as

already established in the literature on attack hardware. The amount of

overestimation varies from one cryptosystem to another. This makes the

laptop "perspective" useless for serious security evaluation---so why

are we seeing this "perspective" appearing in NISTPQC discussions?

Answer: it's yet another random attempt to fill the void left by NIST's

failure to _define_ the NISTPQC security requirements. All sorts of

people interested in NISTPQC are being put in the position of figuring

out for themselves what the NISTPQC security levels mean, because the

only organization in a position to do the job centrally hasn't done it.

> > The submission goes on to state expectations regarding (e.g.) 6960119

> > being more expensive to break than AES-256. The numbers you quote (e.g.,

> > 2^271.18 operations on 2^47.58 bits of RAM) are consistent with these

> > expectations.

> Your submission and this reply focus almost exclusively on the

> mceliece-6960-119 parameter set. What about the other four?

The submission focuses on this parameter set because that suffices to explain the most important effects separating cost metrics in this case:

namely, the state-of-the-art attacks are "bottlenecked by random access

to a huge array (much larger than the public key being attacked)", and

quantum attacks suffer "much more overhead in the inner loop". Repeating
the same points for each parameter set, mutatis mutandis, would be

unilluminating.

Because of the unlimited variation in cost metrics, any parameter set

can be ranked anywhere from "overkill" to "too small" depending on which

cost metric is selected, as the submission spells out for this example.

The incorrect idea that this is specific to Classic McEliece was already

answered: "Note that one can make any cryptosystem sound much easier to

break by choosing a cost metric that assigns unrealistically low cost to

operations used in attacks; RAM access is just one example. If the same

operations are not bottlenecks in attacks against AES-m or SHA-n then

this can reverse comparisons to AES-m or SHA-n respectively."

I'm very much in favor of building tables of publicly verified cost

evaluations for attacks across all parameter sets in all submissions,

with AES-128, SHA-256, etc. as baselines---including how the evaluations

are changing over time, so that we can recognize attack improvements,

evaluate security stability, and evaluate maturity of analysis. However,

this process doesn't work until NIST defines a cost metric for all

NISTPQC submissions to use. Allowing cost metrics to be selected ad-hoc

for particular submissions is error-prone and subject to abuse.

> > The literature shows that optimized hardware for AES attacks has similar

> > efficiency to multipliers. The multipliers in an Intel CPU core carry

> > out millions of bit operations in the same time that it takes the core

> > to retrieve data from (say) 6GB of RAM, never mind RAM contention across

> > cores. GPUs and TPUs show the costs of RAM even more clearly. On a

> > larger scale, decades of supercomputing literature consistently fit a

> > square-root scaling of RAM costs, and science-fiction 3D computers

> > (which seem much harder to build than quantum computers) would still

> > have cube-root costs.

> No. It's not as simple as applying a square-root overhead for memory across

> the board. For an interesting (but not very rigorous) example see

> http://www.ilikebigbits.com/2014_04_21_myth_of_ram_1.html. The blog argues

> for a square-root memory cost. If you look at the graphs this is a poor

> fit in the 10 MiB to 6 GiB range.

No, it isn't a poor fit: the measurements in that blog post are always within one order of magnitude of the square-root

prediction (and people who understand the microarchitecture already know

where the deviations come from). If you look at the supercomputing

literature you'll see the same basic scaling across further orders of

magnitude. (See, e.g., https://cr.yp.to/papers.html#nonuniform, Appendix

Q.6, comparing the records from https://sortbenchmark.org at two

different sizes for the number of 100-byte items sorted per joule.)

There's also a clear physical mechanism producing the square-root

overhead: namely, moving a bit across distance sqrt(N) consists of

sqrt(N) motions through distance 1. Each motion through distance 1

occupies a constant amount of hardware for a constant amount of time,

the same way that a bit operation occupies a constant amount of hardware

for a constant amount of time. Overall the motion through distance

sqrt(N) is occupying Theta(sqrt(N)) times more hardware (per unit time)

than a bit operation is, where Theta covers the ratios between the

relevant constants.

This basic throughput barrier is exactly what one sees in the 1981

Brent--Kung theorem, which obtains the same square root for a broad

class of plausible machine models:

https://maths-people.anu.edu.au/~brent/pd/rpb055.pdf

Fiber doesn't magically avoid this throughput barrier: the hardware is

still filled up by the data being communicated. Free-space lasers might

seem to avoid the cost of equipment in the middle, but they spread out

linearly with distance and don't end up improving scalability.

The "holographic principle" in physics says that the information in a

volume of space is actually two-dimensional. (This won't surprise

physics students who have already practiced converting three-dimensional

to two-dimensional integrals.) The numerical limits don't say that

existing technology is close to optimal, but they do say that anyone

claiming better than square-root scaling is wrong for large enough sizes

and thus has a ton of explaining to do regarding cryptographic sizes.

An easy mistake to make here is to cherry-pick some news story about a

technology improvement at some medium size, and compare it to unimproved

technology at a small size (e.g., the laptop "perspective"), while

failing to ask whether technology improvements are also following the

expected square-root curve down to small sizes.

> Not all memory accesses are the same. The memory overhead depends on the

> number of random accesses to high-latency memory.

Indeed, and as noted above, the attacks we're talking about here are bottlenecked by random access to a huge

array ("much larger than the public key being attacked").

Note that one can replace parallel random access with, e.g., bitonic

sorting, at which point the word "random" might seem misleading since

at a low level all accesses are through a predefined network. What

really matters, as the Brent--Kung proof clearly shows, is simply how

much data is moving across long distances.
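To make the "predefined network" point concrete, here is a minimal bitonic sorter (a sketch): the sequence of compare-exchange positions depends only on the array length, never on the data being sorted.

```python
def bitonic_sort(x):
    """Sort x in place via a bitonic network; len(x) must be a power of 2."""
    n = len(x)
    assert n > 0 and n & (n - 1) == 0
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    # direction and positions are fixed by (k, j, i) alone,
                    # independent of the values being sorted
                    ascending = (i & k) == 0
                    if (x[i] > x[partner]) == ascending:
                        x[i], x[partner] = x[partner], x[i]
            j //= 2
        k *= 2
    return x

print(bitonic_sort([7, 3, 6, 2, 5, 0, 4, 1]))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Since every compare-exchange wire is fixed in advance, the cost of such a network is governed purely by how far data travels through it, which is the Brent--Kung point.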

> One cost model in Ray Perlner's slides assumes that up to 2^50 bits of

> memory is local with unit cost memory accesses.

That assumption drastically understates memory costs. As stated above: "The multipliers in an Intel CPU core carry out millions of bit operations in the same time that it takes the core to retrieve data from (say) 6GB of RAM, never mind RAM contention across cores."

> Can you point to the paper cited by your

> submission that gives numbers for the mceliece-4608-096 parameter set

> which clearly justify its assignment as category 3?

NIST has failed to define the metrics to be used, so there's no way for

anyone to be sure what's included in "category 3".

This isn't specific to Classic McEliece. Every submission is in the same

position of uncertainty regarding what NIST's security requirements

mean. Beyond actual security risks, this creates an artificial risk of

being criticized for allegedly not meeting someone's interpretation of

NIST's security requirements. There is no way to avoid this risk: _any_

"category" assignment of _any_ parameter set in _any_ submission can be

targeted by a metric-selection attack. The way that these criticisms are

allocated to submissions has nothing to do with security; it is a

political game of ad-hoc marketing of cost metrics.

Regarding citations, the Classic McEliece submission cites the original

attack papers. Those papers, in turn, convincingly count the operations

that they use. Each operation count corresponds to many different cost

numbers for many different choices of cost metric:

          attack analysis                  cost metric 1
  attack ---------------> operation count -------------> cost 1
                                         \ \ \ \ \  cost metric 2
                                          \ \ \ \ \-------------> cost 2
                                           \ \ \ \  cost metric 3
                                            \ \ \ \-------------> cost 3
                                             \ \ \  cost metric 4
                                              \ \ \-------------> cost 4
                                               \ \  cost metric 5
                                                \ \-------------> cost 5
                                                 \  cost metric 6
                                                  \-------------> cost 6

                                            etc.

This correspondence is conceptually straightforward, but _which cost

metric are submitters supposed to use for NISTPQC_? Nobody knows.

There are other submissions where the operations in attacks are much

harder to count, but if we optimistically imagine this problem being

fixed then we're still faced with the same basic cost-metric question

for all submissions, as noted above.

Submitters could start from Perlner's recent "preliminary" slides, which

run through various possible features of metrics. How many metrics are

clearly defined in those slides, with a NIST commitment to treat attacks

in those metrics as being relevant? Zero. So what exactly are submitters

supposed to do? For each feature listed by Perlner, try to guess the

sanest-sounding definition of a specific metric that seems consistent

with that feature, and then work through a pile of cost evaluations in

all of these metrics, while being pretty sure that NIST will disregard

most, perhaps all, of the resulting numbers? See how crazy this is?

> You are arguing that because NIST has not specified a single metric

> no evaluation in any metric can be valid. The Call for Proposals actually

> says (NIST's emphasis):

> "In order for a cryptosystem to satisfy one of the above security

> requirements, any attack must require computational resources

> comparable to or greater than the stated threshold, with respect

> to *all* metrics that NIST deems to be potentially relevant to

> practical security."

> NIST gives circuit size as an example of a metric

As a baseline, I think everyone agrees that the structure specified in

the call for proposals is as follows:

* There's a set M of metrics to be used for NISTPQC (the "metrics

that NIST deems to be potentially relevant to practical security").

* Cryptosystem X is eliminated from category 1 (and thus from

NISTPQC) if there's even _one_ metric in M saying that attacking X

costs less than AES-128 key search.

* Cryptosystem X is eliminated from category 2 if one metric in M

says that attacking X costs less than SHA-256 collision search.

* Similarly for other categories.

But what's M? Can we even point to _one_ clear example of something in

M? Unfortunately not. If you look for a NIST document that clearly

defines a metric and specifies that the metric is in this set M, you

won't find any such document!

For example, it's _not_ true that there's some clearly defined "circuit

size" metric specified to be in M, and it's not true that there's some

clearly defined "gates" metric specified to be in M. A clear definition

of NIST "gates" would let the community understand and verify the claim

in the call for proposals that known SHA3-256 attacks, such as BHT,

don't beat 2^146 NIST "gates". Right now this is unverifiable,

unfalsifiable, mathematically meaningless, since there's no definition.

> and says that AES-192 key

> recovery is equivalent to 2^207 classical gates in this metric

> That's it. No mention of the other parameter sets. No attempt to quantify

> the impact of memory accesses.

The cited attack papers give operation counts for arbitrary input sizes. As soon as NIST defines a cost metric, an

analysis in that metric will be straightforward.

> Just a vague statement that something that is "considerably below

> 2^256" bit operations will be fine because memory.

The submission's statement is that this parameter set is more expensive to break than "AES-256". This is not vague. It has the cost metric as a parameter, and

of course one _can_ define cost metrics that reverse the comparison by

ignoring important components of costs, but, again, having to allow

every possible metric would undermine _every_ category assignment in

_every_ submission.

Quantifying "considerably below 2^256" depends on the cost metric. The

minimum "time" in the literature is achieved by algorithms using absurd

amounts of memory, and those algorithms are optimal in metrics that allow

x[i] with cost 1, but gradually increasing the RAM cost in the metrics

(compared to other costs) shifts the optimum to different algorithms.
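The way the optimum shifts can be illustrated with a toy model (all numbers here are hypothetical, chosen only to show the crossover; they are not taken from any attack paper):

```python
# Two attack styles against the same system: one time-optimized using
# huge memory, one memory-frugal.  Cost metric: operations * memory**e,
# where e interpolates between free RAM (e = 0) and the square-root
# scaling (e = 0.5) discussed earlier in this thread.
ALGOS = {
    "low-time/high-memory": (2**140.0, 2**80.0),   # (operations, memory)
    "high-time/low-memory": (2**160.0, 2**20.0),
}

def best_algorithm(e):
    """Return the cheapest algorithm under cost = ops * mem**e."""
    return min(ALGOS, key=lambda name: ALGOS[name][0] * ALGOS[name][1] ** e)

# Raising the RAM-cost exponent flips which algorithm is optimal:
print(best_algorithm(0.0))   # low-time/high-memory
print(best_algorithm(0.5))   # high-time/low-memory
```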

> This is the irrelevant hype I mean. You are ignoring asymptotic improvements

> to half-distance decoding because McEliece uses errors with sublinear weight

Accounting for the sublinear error weight, and correctly concluding that the asymptotic change in security levels

is exactly 0%, is not "ignoring" anything.

The 0% change has always been clearly and correctly labeled as being

asymptotic. This means that there's <10% change once the security level

is large enough, <1% change once the security level is large enough,

etc. Regarding the question of what "large enough" means, the submission

cites the relevant attack papers and emphasizes the importance of

concrete cost analysis.

Seeing that lattices have a worse picture here---e.g., the asymptotic

lattice security levels were 42% higher just ten years ago---is a risk

indicator. Nobody says it's the only risk indicator! Classic McEliece

has established a strong track record of observing many other risk

indicators and acting accordingly.

This is important. History suggests that identifying and monitoring risk

indicators is one of the most effective ways that system designers can

avoid disasters. There's ample evidence for this in the general

risk-management literature, and there's quite a bit of evidence in

cryptography.

Consider, for example, pre-quantum discrete-logarithm-based

cryptography. There was already a long history by 2005 of

* multiplicative groups suffering percentage losses (and sometimes

larger losses) of asymptotic security levels, including losses

avoided by elliptic curves, and

* fields with extra automorphisms suffering percentage losses of

asymptotic security levels, including losses avoided by prime

fields.

Public recommendations from cryptographers monitoring the literature

concluded that multiplicative groups were higher risk than elliptic

curves, and that fields with extra automorphisms were higher risk than

prime fields. (Of course, people desperate to paint the opposite picture

could always come up with reasons: the RSA company was still claiming

that elliptic curves hadn't been studied, for example, and there was

also an interesting story regarding "medium" fields.) Unsurprisingly, we

saw big further security losses after that for multiplicative groups

(for some highlights see https://blog.cr.yp.to/20161030-pqnist.html),

especially in the case of fields with extra automorphisms.

What does this tell us about lattices vs. McEliece? Should we be on the

alert for percentage losses of asymptotic security levels (many losses

for lattices, nothing for McEliece)? Larger losses (yes for lattices, no

for McEliece)? Unnecessary automorphisms? All of the above?

If there's a rational argument that a particular risk indicator _isn't_

helpful, the argument should be published for further analysis. Posting

one message after another labeling a risk indicator as "irrelevant hype"

doesn't move the analysis forward, and seems hard to reconcile with the

thoroughly documented discrete-log history.

> yet compare it with asymptotic improvements to the exact short vector problem

> even though lattice schemes depend on different variants of the problem.

The SVP comparison is directly relevant: the asymptotic SVP exponent being 42% higher in 2010 than today translates directly into the security levels of

(otherwise unbroken) lattice systems being asymptotically 42% higher in

2010 than today. Basically, the lattice attacks are bottlenecked by BKZ,

and BKZ is bottlenecked by a relatively short series of expensive

shortest-vector searches in a lower dimension (beta, the "block size"),

so if the SVP solution loses P% of its asymptotic bits of security then

the whole attack loses P% of its asymptotic bits of security.

(BKZ is one of many algorithms, and ongoing research indicates that SVP

isn't really the right interface, but these issues don't seem to affect

the exponent. A bigger issue is hybrid attacks: my message on that topic

in July outlines a way to remove another percentage from the security

level for any system with errors in, say, {-3,...,3}, so the loss for

lattice security compared to 2010 will be even larger than the SVP loss.

Some hybrid-attack details still need verification, and the percentage

hasn't been calculated yet.)

For comparison, consider https://eprint.iacr.org/2017/1139.pdf reporting

exponent 0.0465 for half-distance decoding, where Prange's algorithm

from 51 years earlier was exponent 0.0576; see the paper for a survey of

intermediate results. This asymptotic 24% change corresponds to an

asymptotic 0% change in the asymptotic security level of McEliece's

system. The situation for McEliece's system has always been that the

asymptotic bits of security come primarily from a gigantic combinatorial

search through information sets.
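The 24% figure follows directly from those two exponents (a quick illustrative check, not from the message):

```python
# Illustrative check: half-distance-decoding ISD exponents, Prange 1962
# vs. the 0.0465 exponent reported in eprint 2017/1139.
PRANGE, BEST = 0.0576, 0.0465
print(f"{(PRANGE - BEST) / BEST * 100:.0f}%")  # ~24%
```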

To summarize: The graph in https://video.cr.yp.to/2020/0813/video.html

correctly compares asymptotic attack costs, as the label indicates. In

particular, the asymptotic security level of otherwise unbroken lattice

cryptosystems has dropped as dramatically as shown in the graph. (Also,

recent work, once quantified, should produce further lattice losses.)

> > The section goes on to summarize what the literature says about concrete

> > cost analyses, and ends with the AES-256 comparison reviewed above.

> The rest of section 8.2 is quoted above in its entirety. The literature

> says much more about concrete cost analyses than this "summary".

something." I agree that the literature says much more than the summary

does.

> > Evaluating such a claim would require a clear, stable definition of

> > "Category 3". If NIST settles on a definition to be used in NISTPQC then

> > submitters will be able to make "category" assignments accordingly.

> > See above regarding the actual costs of accessing 6GB of RAM.

> No. The definition of category 3 is stable.

"Category 3". NIST has given a pseudo-definition---something that looks

at first glance like a definition but that turns out to rely on

undefined metrics. How are submitters supposed to evaluate costs in cost

metrics that haven't been defined? See above.

> 2^189.2 classical gates does not mean 2^189.2 random memory

> accesses. Not all gates involve a memory access. Not all memory accesses

> are to high-latency memory. Not all high-latency memory accesses are

> random.

Of course, attacks vary in how often they're randomly accessing

memory. However, all of the

attacks we're discussing here are bottlenecked by random access.

> My point is that you can't just say that everything will be fine

> because memory. You actually need to do the analysis.

The submission already includes a detailed cost analysis of Stern's attack with many

further optimizations. The inner loop consists of picking up a number

that looks random, xor'ing it into an index i, randomly accessing x[i],

and going back to the top of the loop.
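In sketch form (my own illustration of the loop shape described here, not code from the submission), the loop looks like:

```python
# Illustrative sketch of a Stern-style inner loop: each iteration
# derives a pseudo-random index and immediately performs a random
# access into a large table, which is the bottleneck.
def stern_style_inner_loop(x, seeds, mask):
    acc = 0
    i = 0
    for s in seeds:          # a number that "looks random"
        i = (i ^ s) & mask   # xor it into an index i
        acc ^= x[i]          # random access to x[i]
    return acc               # then back to the top of the loop

# Toy usage with a 16-entry table:
print(stern_style_inner_loop(list(range(16)), [3, 7, 11], 15))  # 8
```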

As the submission says, this attack is "bottlenecked by random access".

Newer attacks are similarly memory-bound, using even more memory.

The operation counts are clear (for this submission; again, there are

other submissions where attacks are much harder to analyze). The notion

that there's some missing analysis of how many RAM accesses there are is

simply not true. What's missing is a definition from NIST of how NISTPQC

assigns costs to x[i] and other operations.

> > There's also a tremendous amount to say about the dangers of unnecessary

> > complexity. One aspect of this is specifying more parameter sets than

> > necessary. Of course, NIST's request for lower-security parameter sets

> > was an offer that Classic McEliece couldn't refuse, but we continue to

> > recommend the 2^256 parameter sets.

> I assume that if Classic McEliece is chosen for standardization you will

> be withdrawing the parameter sets that you don't recommend for use.

NIST reserves the right to make its own parameter-set decisions after it consults with

the submitters and the community. See the call for proposals. Obviously

submission teams and other interested parties will provide feedback to

NIST regarding the wisdom of their decisions.

> > One can easily write down even larger parameter sets. Also, the ISD

> > attack analyses in the literature are precise and straightforward, and

> > it should be easy to evaluate the security of any desired parameters in

> > any well-defined metric. However, the right order of events is, first,

> > for NIST to fully define the cost metric to be used for "categories",

> > and, second, for all submission teams to evaluate costs in this metric.

> No. The right order of events is, first, submit parameter sets with security

> estimates in a metric that you think is practically relevant and, second,

> let everyone else spend three rounds of the NIST process evaluating those

> claims. That's what most of the other submissions have done.

Let's look at what other submissions have in fact done.

The round-3 Kyber submission has an analysis concluding that attacks

against round-3 Kyber-512 are probably---accounting for the "known

unknowns"---somewhere between 2^135 and 2^167 "gates", a remarkably wide

range for a cryptosystem that claims its security is well understood.

The submission hopes that the "known unknowns" will be pinned down, and

that the final answer will be above 2^143. The submission also claims to

use a "conservative lower bound" on attack costs. Can NIST see whether

this "bound" is above or below 2^143? Can anyone else see this? Nope.

Could someone have been reviewing the analysis since the first round of

the NIST process? No. The parameter set has changed. The analysis has

changed. The problem that cryptanalysts are being asked to study has

changed! Kyber-512 used to claim that its security was based on the

MLWE problem, but, as far as I can tell, this has been dropped in round

3. Concretely, my understanding is that if someone claims an attack

against round-3 Kyber-512 by showing that MLWE for that size is easier

to break than AES-128, the official response will be that the round-3

submission does _not_ claim MLWE attacks are so hard---its security

levels are claimed only for direct IND-CPA/IND-CCA2 attacks. (I've

asked for confirmation of this in a separate thread on Kyber.)

So much for spending "three rounds of the NIST process evaluating those

claims", or, in the words of the call for proposals, "maturity of

analysis". Now let's move on to the "metric" question.

Let's say Caroline the Cryptanalyst is looking at round-3 Kyber-512.

What would be required, structurally, for Caroline to show that round-3

Kyber-512 is broken? Let's assume for simplicity that Kyber "gates" are

clearly defined and that AES-128 key search needs 2^143 Kyber "gates".

Here are two possibilities for Caroline:

* Strategy 1: Show that the best attacks considered in the submission

actually use fewer than 2^143 "gates". This is compatible with the

2^135...2^167 range claimed in the submission.

* Strategy 2: Show that attacks not considered in the submission,

such as hybrid attacks, use fewer than 2^143 "gates".

These would both be breaks, since they'd violate the minimum for

category 1, right? Nope. The submission protects itself against this by

leaving wiggle room in the cost metric. Here's the critical quote:

We do not think that [2^135] would be catastrophic, in particular

given the massive memory requirements ...

The submission is saying that 2^135 "gates" wouldn't actually be a

problem (assuming the attacks use tons of memory, which most, although

not all, lattice attacks do), since of course memory costs much more

than a "gate" count indicates.

Should Caroline expect NIST to downplay the memory requirements and

accept the attack as a break? Why?

NIST has repeatedly asked for "confidence" that the NISTPQC security

requirements are reached. I challenged the security of round-2

Kyber-512, pointing out errors in the security analysis and pointing out

reasons to believe that known attacks don't use more "gates" than

AES-128. Did NIST claim "confidence" that the attacks actually use more

"gates" than AES-128? No. Did NIST kick out that parameter set? No.

Instead NIST initiated a "preliminary", open-ended project to start

incorporating overheads into cost metrics.

Does this sound like NIST automatically kicking out any parameter set

when there's an attack below 2^143 "gates", with free RAM "gates"? No.

It sounds like NIST shifting the boundaries of "category 1" as far as

necessary to declare that Kyber-512 isn't broken, by playing games with

the cost metric. Maybe NIST has another reason for not specifying a

metric, but in any case the moving target deters cryptanalysis.

Part of recognizing security risks is recognizing the procedures that

are actually being used for designing, evaluating, and standardizing

cryptography. Cryptanalytic work is a critical part of this, and NISTPQC

is raising far more security questions than cryptanalysts can answer in

the time available. It's not as if the attack picture was stable in

2020, so why should we expect it to suddenly be stable in 2021, or in

2025? This denial-of-service attack against cryptanalysts starts from

the inherent cryptanalytic complexity of post-quantum cryptography, but

the attack is exacerbated by NIST's continued failure to clarify its

pseudo-definition of the minimum allowed NISTPQC security level.

---Dan

Dec 11, 2020, 8:29:27 PM12/11/20

to pqc-forum, pqc-co...@nist.gov

Dan Bernstein (speaking for himself) wrote:

> As far as I can tell, the questions specific to Classic McEliece

> have already been answered.

In the formal sense that you responded, I guess. It's like asking

the time and being told that it depends on the observer's frame of

reference. Thanks Einstein. It is an answer but I still don't know

if I'm late for class.

Kirk Fleming wrote:

>>>> I am filing this comment to request that the Classic McEliece

>>>> submitters justify the claimed security categories for their

>>>> parameters.

Dan Bernstein (on behalf of the Classic McEliece team) wrote:

>>> Can you please clarify how you think that the numbers already

>>> in the literature don't already justify the assignments in the

>>> submission?

Kirk Fleming wrote:

>> Let's try an easier question. Can you point to the paper cited

>> by your submission that gives numbers for the mceliece-4608-096

>> parameter set which clearly justify its assignment as category 3?

Dan Bernstein (speaking for himself) wrote:

> NIST has failed to define the metrics to be used, so there's no

> way for anyone to be sure what's included in "category 3".

To be absolutely clear, you are stating that the numbers in the

literature already justify the assignments in the submission but

that you can't point to any specific numbers because there is no

way for anyone to be sure what's included in each category.

Okaaaay.

Let's try an even easier question. Why mceliece-4608-96?

The mceliece-6960-119 parameter set was taken from "Attacking and

defending the McEliece cryptosystem". You could have chosen the

mceliece-4624-95 parameter set from the same paper. Instead you

chose a parameter set which had "optimal security within 2^19 bytes

if n and t are required to be multiples of 32". (mceliece-4624-95

was chosen to be optimal without the restrictions on n and t)

mceliece-5856-64 and mceliece-4160-128 are also within 2^19 bytes.

Both of these have smaller public keys than mceliece-4608-96 and

mceliece-5856-64 has smaller ciphertexts. Please explain the metric

you used to decide which of the three parameter sets had "optimal

security".

Kirk

Dec 30, 2020, 7:49:37 PM12/30/20

to pqc-forum, pqc-co...@nist.gov

I promise this is my last message on the topic. Dan has

stopped responding but I wanted to make one final point.

Dan Bernstein (on behalf of the Classic McEliece team) asked:

> Can you please clarify how you think that the numbers already

> in the literature don't already justify the assignments in the

> submission?

The only concrete security estimate given in the entire Classic

McEliece submission is:

"[Bernstein, Lange and Peters. 2008] reported that its attack

uses 2^266.94 bit operations to break the (13,6960,119)

parameter set."

This is not true. Bernstein, Lange and Peters actually say:

"For keys limited to 2^16, 2^17, 2^18, 2^19, 2^20 bytes, we

propose Goppa codes of lengths 1744, 2480, 3408, 4624, 6960

and degrees 35, 45, 67, 95, 119 respectively, with 36, 46,

68, 97, 121 errors added by the sender. These codes achieve

security levels 84.88, 107.41, 147.94, 191.18, 266.94 against

our attack."

The 2^266.94 estimate is not for mceliece-6960-119. It's for a

different parameter set that uses list decoding to increase the

number of errors that can be corrected.

The submission's lack of any serious security analysis for the

five parameter sets being proposed should be cause for concern.

The team's refusal to provide any justification for their claimed

security categories when challenged should be disqualifying.

Kirk

May 21, 2021, 12:55:12 AM5/21/21

to Kirk Fleming, pqc-forum, pqc-comments

Was there ever any response (from the Classic McEliece team or otherwise) to this comment from 30 Dec 2020? More specifically:

On Wed, Dec 30, 2020 at 7:49 PM Kirk Fleming <kpfl...@mail.com> wrote:

The only concrete security estimate given in the entire Classic McEliece submission is:

"[Bernstein, Lange and Peters. 2008] reported that its attack uses 2^266.94 bit operations to break the (13,6960,119) parameter set."

This is not true. Bernstein, Lange and Peters actually say:

"For keys limited to 2^16, 2^17, 2^18, 2^19, 2^20 bytes, we propose Goppa codes of lengths 1744, 2480, 3408, 4624, 6960 and degrees 35, 45, 67, 95, 119 respectively, with 36, 46, 68, 97, 121 errors added by the sender. These codes achieve security levels 84.88, 107.41, 147.94, 191.18, 266.94 against our attack."

The 2^266.94 estimate is not for mceliece-6960-119. It's for a different parameter set that uses list decoding to increase the number of errors that can be corrected.

Does anyone know what effect this has on security?

The submission's lack of any serious security analysis for thefive parameter sets being proposed should be cause for concern.

The team's refusal to provide any justification for their claimed

security categories when challenged should be disqualifying.

Has there since been any such security analysis or justification provided for the proposed parameter sets?

Sincerely yours in cryptography,

Chris

May 21, 2021, 5:09:35 AM5/21/21

to Christopher J Peikert, Kirk Fleming, pqc-forum, pqc-comments

On 5/21/21 6:54 AM, Christopher J Peikert wrote:

> On Wed, Dec 30, 2020 at 7:49 PM Kirk Fleming <kpfl...@mail.com> wrote:

>

>> The only concrete security estimate given in the entire Classic

>> McEliece submission is:

>>

>> "[Bernstein, Lange and Peters. 2008] reported that its attack

>> uses 2^266.94 bit operations to break the (13,6960,119)

>> parameter set."

>>

>> This is not true. Bernstein, Lange and Peters actually say:

>>

>> "For keys limited to 2^16, 2^17, 2^18, 2^19, 2^20 bytes, we

>> propose Goppa codes of lengths 1744, 2480, 3408, 4624, 6960

>> and degrees 35, 45, 67, 95, 119 respectively, with 36, 46,

>> 68, 97, 121 errors added by the sender. These codes achieve

>> security levels 84.88, 107.41, 147.94, 191.18, 266.94 against

>> our attack."

>>

>> The 2^266.94 estimate is not for mceliece-6960-119. It's for a

>> different parameter set that uses list decoding to increase the

>> number of errors that can be corrected.

>>

>

> Does anyone know what effect this has on security?

According to the analysis in the quoted paper [Bernstein, Lange and Peters. 2008], the difference in the security level when going from 121

errors to 119 errors while keeping the other parameters unchanged is

about 4.
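This is easy to sanity-check with a plain Prange iteration count (a rough illustration of my own, not the paper's Markov-chain analysis):

```python
from math import comb, log2

# Rough, illustrative cross-check (plain Prange, not BLP08's exact
# analysis): the expected iteration count for decoding t errors in a
# length-n code of dimension k is about C(n, t) / C(n - k, t).
n = 6960
k = n - 13 * 119            # dimension of the mceliece-6960-119 code

def prange_log2_iters(t):
    return log2(comb(n, t)) - log2(comb(n - k, t))

# 121 errors (BLP08's list-decoding parameter set) vs. 119 errors:
diff = prange_log2_iters(121) - prange_log2_iters(119)
print(f"{diff:.1f}")  # roughly 4-5 bits
```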

Ruben

(speaking for myself)

Jun 21, 2021, 7:17:11 PM6/21/21

to Ruben Niederhagen, Christopher J Peikert, pqc-forum, pqc-comments

In the Classic McEliece presentation at the recent NIST

post-quantum conference Tanja Lange gave estimates for

the security of mceliece-1024-50 and mceliece-1223-23

using the Markov chain analysis from BLP2010. She failed

to do the same for the parameter sets proposed in the

Round 3 submission.

For completeness, a limited search using the isdfq code

by Peters (github.com/christianepeters/isdfq) gave:

Parameter Set      Bit Ops

mceliece-3488-064  2^145.2

mceliece-4608-096  2^187.0

mceliece-6688-128  2^261.9

mceliece-6960-119  2^262.3

mceliece-8192-128  2^297.1

The 2^262.3 estimate for mceliece-6960-119 corresponds

to the 4-bit reduction in security cited by Ruben.

Kirk

Jun 21, 2021, 10:42:53 PM6/21/21

to Kirk Fleming, pqc-forum, pqc-comments

For the record: in nearly six months there has been no response from the Classic McEliece team, nor update to the specification, to this:

On Wed, Dec 30, 2020 at 7:49 PM Kirk Fleming <kpfl...@mail.com> wrote:

Dan Bernstein (on behalf of the Classic McEliece team) asked:> Can you please clarify how you think that the numbers already

> in the literature don't already justify the assignments in the

> submission?The only concrete security estimate given in the entire Classic

McEliece submission is:"[Bernstein, Lange and Peters. 2008] reported that its attack

uses 2^266.94 bit operations to break the (13,6960,119)

parameter set."This is not true. Bernstein, Lange and Peters actually say:"For keys limited to 2^16, 2^17, 2^18, 2^19, 2^20 bytes, we

propose Goppa codes of lengths 1744, 2480, 3408, 4624, 6960

and degrees 35, 45, 67, 95, 119 respectively, with 36, 46,

68, 97, 121 errors added by the sender. These codes achieve

security levels 84.88, 107.41, 147.94, 191.18, 266.94 against

our attack."The 2^266.94 estimate is not for mceliece-6960-119. It's for a

different parameter set that uses list decoding to increase the

number of errors that can be corrected.

A few hours ago, Dan Bernstein (one of the Classic McEliece submitters) wrote:

"Security claims regarding NISTPQC submissions have to be stated clearly for public review. When a reviewer disproves a security claim, the claim has to be withdrawn, and the withdrawal has to properly credit the disproof. Any other approach disincentivizes security review."

Shortly thereafter, Kirk Fleming wrote: "a limited search using the isdfq code by Peters (github.com/christianepeters/isdfq)" gave an estimate for mceliece-6960-119 of 2^262.3 bit operations. This is less than the 2^266.94 bit operations claimed in the submission.

Is this a "disproof" of the submission's security claim? If the code's output represents an actual attack, then yes, it is a disproof.

In any case, the submission certainly contains a disproved *description of* BLP08's clear security claim. By the same rationale from the above quote (do not disincentivize security review), the same remedy is required: withdraw the claim and properly credit the disproof.

Jun 22, 2021, 2:10:28 AM6/22/21

to Kirk Fleming, pqc-forum, pqc-comments

Hi Kirk!

On 6/22/21 1:16 AM, Kirk Fleming wrote:

> In the Classic McEliece presentation at the recent NIST

> post-quantum conference Tanja Lange gave estimates for

> the security of mceliece-1024-50 and mceliece-1223-23

> using the Markov chain analysis from BLP2010. She failed

> to do the same for the parameter sets proposed in the

> Round 3 submission.

Tanja did, however, on the very same slide link to the paper "A Finite

Regime Analysis of Information Set Decoding Algorithms" by Baldi,

Barenghi, Chiaraluce, Pelosi, and Santini, advertising that it provides

"bit operations & memory for all Classic McEliece parameters".

> For completeness, a limited search using the isdfq code

> by Peters (github.com/christianepeters/isdfq) gave:

> Parameter Set Bit Ops

> mceliece-3488-064 2^145.2

> mceliece-4608-096 2^187.0

> mceliece-6688-128 2^261.9

> mceliece-6960-119 2^262.3

> mceliece-8192-128 2^297.1

> The 2^262.3 estimate for mceliece-6960-119 corresponds

> to the 4-bit reduction in security cited by Ruben.

Thank you for doing the math!

I am happy that your calculations fit well to the proposed security

levels of the Classic McEliece submission.

Ruben

(speaking for myself)

Jun 22, 2021, 5:52:52 AM6/22/21

to pqc-forum, pqc-comments

On 6/22/21 4:42 AM, Christopher J Peikert wrote:

> A few hours ago, Dan Bernstein (one of the Classic McEliece submitters)

> wrote

> <https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/Yx0wZuZP6ag/m/rjmo2mYeCQAJ>

> :

>

> "Security claims regarding NISTPQC submissions have to be stated clearly

> for public review. When a reviewer disproves a security claim, the claim

> has to be withdrawn, and the withdrawal has to properly credit the

> disproof. Any other approach disincentivizes security review."

I fail to see a point in the fact that Dan Bernstein is one of the
> "Security claims regarding NISTPQC submissions have to be stated clearly

> for public review. When a reviewer disproves a security claim, the claim

> has to be withdrawn, and the withdrawal has to properly credit the

> disproof. Any other approach disincentivizes security review."

Classic McEliece submitters (as am I); quite clearly his individual

statements are not a joint position of the Classic McEliece submitters.

> Shortly thereafter, Kirk Fleming wrote

> "a limited search using the isdfq code by Peters (

> github.com/christianepeters/isdfq)" gave an estimate for mceliece-6960-119

> of 2^262.3 bit operations. This is less than the 2^266.94 bit operations

> claimed in the submission.

>

> Is this a "disproof" of the submission's security claim? If the code's

> output represents an actual attack, then yes, it is a disproof.

>

> In any case, the submission certainly contains a disproved *description of*

> BLP08's clear security claim. By the same rationale from the above quote

> (do not disincentivize security review), the same remedy is required:

> withdraw the claim and properly credit the disproof.

Just for clarification and to avoid misunderstandings by readers of this

thread: The Classic McEliece submission proposes the mceliece-6960-119

parameter set (13,6960,119) for Category 5. The estimated cost of

2^262.3 bit operations according to this specific reference as computed

by Kirk Fleming is well within Category 5 - probably without any

dispute also by Christopher Peikert.

Chris refers to a different (though interesting) discussion in the

thread "Looseness, security risks, and LWR vs. LWE".

Best regards

Ruben (speaking for myself)

Jun 22, 2021, 6:15:07 AM6/22/21

to Ruben Niederhagen, pqc-comments, pqc-forum

On Tue, Jun 22, 2021 at 5:52 AM Ruben Niederhagen <ru...@polycephaly.org> wrote:

On 6/22/21 4:42 AM, Christopher J Peikert wrote:

> A few hours ago, Dan Bernstein (one of the Classic McEliece submitters)

> wrote

> <https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/Yx0wZuZP6ag/m/rjmo2mYeCQAJ>

> :

>

> "Security claims regarding NISTPQC submissions have to be stated clearly

> for public review. When a reviewer disproves a security claim, the claim

> has to be withdrawn, and the withdrawal has to properly credit the

> disproof. Any other approach disincentivizes security review."

I fail to see a point in the fact that Dan Bernstein is one of the

Classic McEliece submitters (as am I); quite clearly his individual

statements are not a joint position of the Classic McEliece submitters.

Fair enough. My message can be taken as a request to know what *is* the joint position of the Classic McEliece team on the highlighted discrepancy, which has been pending for almost six months, and seems easy to resolve. Moreover, there are several other substantive but unaddressed points made by Kirk Fleming in this thread that deserve some response.

Just for clarification and to avoid misunderstandings by readers of this

thread: The Classic McEliece submission proposes the mceliece-6960-119

parameter set (13,6960,119) for Category 5. The estimated cost of

2^262.3 bit operations according to this specific reference as computed

by Kirk Fleming are well within Category 5 - probably without any

dispute also by Christopher Peikert.

Yes, I don’t dispute that this is within Category 5. My prior message concerned the more precise security claims.

Jun 22, 2021, 8:18:54 PM6/22/21

to Ruben Niederhagen, pqc-forum, pqc-comments

Hi Ruben,

Ruben Niederhagen wrote:

> Thank you for doing the math!

I did very little. Thank Christiane Peters for writing the code!

> I am happy that your calculations fit well to the proposed security

> levels of the Classic McEliece submission.

I disagree. I still believe mceliece-4608-096 is less secure than it

needs to be for Category 3.

This should not be a surprise when you realise the parameter sets

were chosen based on the size of the public key. The loose rule of

thumb is that increasing the size of the public key by a factor of 4

doubles the security so if a 2^18 byte public key is equivalent to

AES-128 then you might reasonably expect a 2^20 byte public key

to be equivalent to AES-256. However, doubling the size of the public

key only increases the security by a factor of Sqrt(2) so you should

only expect a 2^19 byte public key to be equivalent to AES-181.
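The arithmetic behind that interpolation (an illustrative check, not part of the original message):

```python
import math

# Illustrative check of the rule of thumb above: if doubling the key
# size scales security by sqrt(2), a 2^19-byte key sits at
security_2_19 = round(128 * math.sqrt(2))   # "AES-181"
# while quadrupling the 2^18-byte key (to 2^20 bytes) doubles it:
security_2_20 = round(128 * 2)              # "AES-256"
print(security_2_19, security_2_20)         # 181 256
```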

This is not a proof, of course, but it does suggest why the estimates

from the Markov chain analysis are lower for mceliece-4608-096,

relatively speaking, than the other parameter sets.

Kirk

Jun 23, 2021, 7:24:50 AM6/23/21

to Kirk Fleming, pqc-forum, pqc-comments

Hi Kirk!

On 6/23/21 2:18 AM, Kirk Fleming wrote:

parameters in code-based crypto that are contributing to the key size on

the one hand and to the attack cost on the other hand; there are many

parameter sets with a public key size of, e.g., around 512 kB but

varying attack costs - as there are many parameter sets for a cost of

around 2^192 operations but with varying key sizes.

For example, the (not proposed) parameter sets (m=13, n=4750, t=90) and

(m=13, n=7900, t=44) have a very similar public key size of about 512 kB

- but significantly different attack costs.

> However, doubling the size of the public key only increases the

> security by a factor of Sqrt(2) so you should only expect a 2^19 byte

> public key to be equivalent to AES-181. This is not a proof, of

> course, but it does suggest why the estimates from the Markov chain

> analysis are lower for mceliece-4608-096, relatively speaking, than

> the other parameter sets.

Well, applying well established formulas to assess the cost of an attack

for specific parameter sets (as you did using Christiane's script)

sounds good enough to me; I hope you are not suggesting that the cost

analysis itself is flawed.

Picking parameter sets according to key sizes is not uncommon;

recommended RSA key sizes tend to be a power of two or at least a

multiple of 1024 bit. Also common security levels themselves like 128,

192, and 256 (also 160, 224, 320, ...) are somewhat arbitrary and

probably indeed were motivated by the corresponding key sizes of AES,

ECC, etc. However, you are of course entitled to your own opinion on

whether or not parameter sets (as long as they provide the desired

security level) should be picked targeting a specific key size.

Ruben

(speaking for myself)


Jun 23, 2021, 8:16:55 AM6/23/21

to Ruben Niederhagen, pqc-forum, pqc-comments

Hi Ruben,

Ruben Niederhagen wrote:

>>> I am happy that your calculations fit well to the proposed

>>> security levels of the Classic McEliece submission.


Kirk Fleming wrote:

>> I disagree. I still believe mceliece-4608-096 is less secure than it

>> needs to be for Category 3.

>> This should not be a surprise when you realise the parameter sets

>> were chosen based on the size of the public key. The loose rule of

>> thumb is that increasing the size of the public key by a factor of 4

>> doubles the security so if a 2^18 byte public key is equivalent to

>> AES-128 then you might reasonably expect a 2^20 byte public key to be

>> equivalent to AES-256.


Ruben replied:

> How do you get to this rule of thumb?

See for example Section 8.1 of the Classic McEliece submission:

"Security level 2^b thus uses key size (c_0+o(1))b^2 (lg b)^2

where c0 = R/(1−R)(lg(1−R))^2 ."

If we ignore log factors, when R is fixed the size of the public key

is asymptotically quadratic in the security parameter.

> Be aware that there are several

> parameters in code-based crypto that are contributing to the key size on

> the one hand and to the attack cost on the other hand; there are many

> parameter sets with a public key size of, e.g., around 512 kB but

> varying attack costs - as there are many parameter sets for a cost of

> around 2^192 operations but with varying key sizes.


> For example, the (not proposed) parameter sets (m=13, n=4750, t=90) and

> (m=13, n=7900, t=44) have a very similar public key size of about 512 kB

> - but significantly different attack costs.


Of course it is possible to choose parameters with similarly sized

public keys but varying security levels. However, the Classic McEliece

parameter sets were chosen to maximize security given an upper bound

on the public key size. This will tend to give parameter sets with similar

values for R.

>> However, doubling the size of the public key only increases the

>> security by a factor of Sqrt(2) so you should only expect a 2^19 byte

>> public key to be equivalent to AES-181. This is not a proof, of

>> course, but it does suggest why the estimates from the Markov chain

>> analysis are lower for mceliece-4608-096, relatively speaking, than

>> the other parameter sets.

> Well, applying well established formulas to assess the cost of an attack

> for specific parameter sets (as you did using Christiane's script)

> sounds good enough to me; I hope you are not suggesting that the cost

> analysis itself is flawed.


No, I'm not suggesting that the analysis is flawed. I'm suggesting that the

analysis doesn't support the claimed security category for this parameter

set.

> Picking parameter sets according to key sizes is not uncommon;

> recommended RSA key sizes tend to be a power of two or at least a

> multiple of 1024 bit. Also common security levels themselves like 128,

> 192, and 256 (also 160, 224, 320, ...) are somewhat arbitrary and

> probably indeed were motivated by the corresponding key sizes of AES,

> ECC, etc. However, you are of course entitled to your own opinion on

> whether or not parameter sets (as long as they provide the desired

> security level) should be picked targeting a specific key size.

I have no objection to choosing parameter sets that target specific key sizes

provided that they also meet the claimed security categories.

Kirk

Jun 23, 2021, 10:17:11 AM6/23/21

to Kirk Fleming, pqc-forum, pqc-comments

Hi Kirk!

On 6/23/21 2:16 PM, Kirk Fleming wrote:
> Ruben replied:
>> How do you get to this rule of thumb?
>
> See for example Section 8.1 of the Classic McEliece submission:
> "Security level 2^b thus uses key size (c_0+o(1))b^2 (lg b)^2
> where c0 = R/(1−R)(lg(1−R))^2 ."
> If we ignore log factors, when R is fixed the size of the public key
> is asymptotically quadratic in the security parameter.

Well - an asymptotic rule of thumb might be quite a bit off for concrete
parameter sets - as illustrated by my example parameter sets (m=13,
n=4750, t=90) and (m=13, n=7900, t=44).

>> Well, applying well established formulas to assess the cost of an attack
>> for specific parameter sets (as you did using Christiane's script)
>> sounds good enough to me; I hope you are not suggesting that the cost
>> analysis itself is flawed.
>
> No, I'm not suggesting that the analysis is flawed. I'm suggesting that the
> analysis doesn't support the claimed security category for this parameter
> set.

I am confused; the very attack costs that you reported in one of your
earlier mails for the Category-3 parameter sets seem to support the
claimed security category. Could you be more specific on why you
consider this not to be the case?

I'd suggest to take this offline so we don't continue to 'spam' the
mailing list; this discussion might be off-topic for many readers. Maybe
we can just report back with the results of our offline discussion
afterwards.

Ruben

Jun 23, 2021, 7:30:36 PM6/23/21

to pqc-co...@nist.gov, pqc-...@list.nist.gov

> "Security level 2^b thus uses key size (c_0+o(1))b^2 (lg b)^2

> where c0 = R/(1−R)(lg(1−R))^2 ."

> If we ignore log factors, when R is fixed the size of the public key

> is asymptotically quadratic in the security parameter.

The argument here, together with the context, is

(1) claiming that the 1MB parameters are cutting things close for 256,

(2) claiming that o(1) can be treated as 0,

(3) claiming that log factors can be ignored, giving a simple b^2,

(4) observing that sqrt(1/2) is 6% below sqrt(192/256), and

(5) concluding that the 1/2 MB parameters are 6% off for 192.

But if one _doesn't_ ignore log factors then the b^2 (lg b)^2 ratio

between b=256 and b=192 is 1.98, which turns the 6% into under 1%. The

argument never claims that #1 is cutting things _that_ close.
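The 1.98 figure can be reproduced directly from the submission's asymptotic. The constant c_0 cancels in the ratio and the o(1) term is dropped, so this is only a sketch of the scaling:

```python
import math

def keysize_scale(b):
    """b^2 (lg b)^2 key-size scaling from the submission's asymptotic;
    the constant c_0 cancels in ratios and the o(1) term is ignored."""
    return b**2 * math.log2(b)**2

print(keysize_scale(256) / keysize_scale(192))  # ~1.98, versus the
# bare quadratic's 256^2/192^2 = ~1.78 when log factors are dropped
```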

Bigger picture: #3 is pointless oversimplification; the submission

already specifically disclaims #2; #1 is ill-defined, and, for people

who understand what the literature says about the attacks, it's hard to

imagine a realistic cost metric that would make #1 correct. In general,

basing such a minor quantitative complaint on such a large pile of

assumptions can simply be dismissed as too error-prone to consider.

The submission team officially requested last year that NIST "fully

define the cost metric to be used for 'categories'" so that all

submission teams can evaluate costs in this metric. NIST's failure to do

this---despite the transparency rule in the call for proposals---is the

primary reason that we all keep wasting time on arguments founded upon

ill-defined claims. This is not how NISTPQC should work.

---Dan (speaking for myself)

Jun 24, 2021, 11:06:33 AM6/24/21

to pqc-...@list.nist.gov

I do not believe that Kirk attempted to
make the argument that you ascribe to him at all. I believe that
he was very clear that he was not using what he described as a
"loose rule of thumb" as the basis for questioning the security
level of mceliece-4608-096.

Kirk provided a table of security
estimates and suggested that the estimate for one of the parameter
sets seemed to be lower than required for the claimed security
level. I have not seen any response to these emails expressing
disagreement with the security estimates from the table or
attempting to explain why mceliece-4608-096 should be considered
category 3 despite the security estimate of 2^187.0 bit ops.

David (speaking only for myself)

----------------------------------------------------------------------

On 6/21/21 7:16 PM, Kirk Fleming
wrote:

For completeness, a limited search using the isdfq code

by Peters (github.com/christianepeters/isdfq) gave:

Parameter Set        Bit Ops
mceliece-3488-064    2^145.2
mceliece-4608-096    2^187.0
mceliece-6688-128    2^261.9
mceliece-6960-119    2^262.3
mceliece-8192-128    2^297.1

On 6/22/21 8:18 PM, Kirk Fleming
wrote:

I still believe mceliece-4608-096 is less secure than it needs to be for Category 3.

----------------------------------------------------------------------

Jun 24, 2021, 12:55:40 PM6/24/21

to David A. Cooper, pqc-...@list.nist.gov

On 6/24/21 5:06 PM, 'David A. Cooper' via pqc-forum wrote:

> Kirk provided a table of security estimates and suggested that the

> estimate for one of the parameter sets seemed to be lower than required

> for the claimed security level. I have not seen any response to these

> emails expressing disagreement with the security estimates from the

> table or attempting to explain why mceliece-4608-096 should be

> considered category 3 despite the security estimate of 2^187.0 bit ops.

Let me quote from page 46 of the Classic McEliece submission document:

"A closer look shows that the attack in [Bernstein, Lange and Peters.

2008] is bottlenecked by random access to a huge array (much larger than

the public key being attacked), and that subsequent ISD variants use

even more memory."

Hence, I am feeling very comfortable with the quoted cost of 2^187 bit
operations for Category 3; the script that Kirk is using counts only bit

operations as well and does not take memory cost into consideration.

Let me refer you again to the paper "A Finite Regime Analysis of

Information Set Decoding Algorithms" by Baldi, Barenghi, Chiaraluce,

Pelosi, and Santini, which covers several ISD attack variants and also
reports memory cost. In that paper, the interesting algorithms for the

Category 3 parameter set are the MMT and BJMM variants of ISD, which

both come with significant memory cost.

As Dan mentioned, an official definition or general agreement on how to

include the memory cost into the NIST security categories is still open.

I'd argue that, including memory cost, all security categories are well

covered by the Classic McEliece submission.

Nevertheless, being a code-based scheme, Classic McEliece allows
parameter sets to be chosen quite flexibly - so if eventually there was a
definition or general agreement on how to handle memory cost that the
currently proposed parameter sets did not meet, then suitable parameter
sets could be chosen quite easily.

Ruben

(speaking for myself)

Jun 24, 2021, 7:15:54 PM6/24/21

to pqc-...@list.nist.gov

Some procedural objections.

'David A. Cooper' (NIST) via pqc-forum writes:

> I have not seen any response to these emails expressing

> disagreement with the security estimates from the table or attempting to

> explain why mceliece-4608-096 should be considered category 3 despite the

> security estimate of 2^187.0 bit ops.

In fact, the Classic McEliece team filed an OFFICIAL COMMENT dated 19

Nov 2020 13:37:46 +0100 responding in detail to the first iteration of

this claim regarding bit operations. Procedurally, it's not reasonable

to have every subsequent minor variation of the argument forcing the

team to prepare a new response. The big problem pointed out in the

original response hasn't gone away.

If NIST didn't understand something in the team response, then NIST

should have promptly asked for clarification. I understand that you're

speaking for yourself, but doesn't NIST have a central issue tracker

making it easy for NIST people to see what has been addressed already?

> I do not believe that Kirk attempted to make the argument that you

> ascribe to him at all.

Let's look at the facts. He

* quoted the "(c_0+o(1))b^2 (lg b)^2" asymptotic;

* explicitly ignored "log factors"---that's #3 in my list---and

* ignored the o(1)---that's #2 in my list.

He used this to justify previously claiming a quadratic "rule of thumb".

He used this "rule of thumb", in turn, to say

* "you should only expect a 2^19 byte public key to be equivalent to

AES-181"---that's #4 and #5 in my list.

Here 181 arises as sqrt(1/2)*256. The logic here has a gaping hole when

the 1MB key costs _much more_ to break than AES-256, so he instead

* claimed "a 2^20 byte public key to be equivalent to

AES-256"---that's #1 in my list.

The starting point before this is actually a claim about 128, which is

equivalent to the claim about 256 for anyone who believes the quadratic

from #2 and #3; sqrt(2)*128 is the same 181.

So his argument for "you should only expect a 2^19 byte public key to be

equivalent to AES-181" is the argument that I attributed to him. Please

withdraw your statement to the contrary.

> I believe that he was very clear that he was not using what he

> described as a "loose rule of thumb" as the basis for questioning the

> security level of mceliece-4608-096.

Readers will understand his original 194.36 argument (from a script

simplifying costs in one way), and his latest 187.0 argument (from a

script simplifying costs in another way), to be claiming that 460896 is

slightly less secure than AES-192. The big problem with this claim was

already covered in the team message from last year.

Readers will _also_ understand his new argument that "you should only

expect a 2^19 byte public key to be equivalent to AES-181" as claiming

that 460896 is slightly less secure than AES-192. Within this argument,

#1 starts from the same big problem as the other argument, but #2 and #3

aren't shared by the other argument.

I quoted #3, correctly summarized the whole argument, and then pointed

out how the inaccuracy of #3 destroys the argument (while also noting

the problems with #1 and #2). I expected to see the argument withdrawn.

I didn't expect to have to contend with claims that the argument won't

affect readers and claims that the argument wasn't even presented.

---Dan


Jun 25, 2021, 8:37:49 PM6/25/21

to Ruben Niederhagen, pqc-forum, pqc-comments

Ruben wrote:

> I'd suggest to take this offline so we don't continue

> to 'spam' the mailing list; this discussion might be

> off-topic for many readers.


I was planning to agree but I now find myself being asked

to withdraw claims that I don't believe I made based on

objections that distort what I said.


Let's recap.

The Classic McEliece update included security estimates for

two challenge problems but did not extend the analysis to

the parameter sets proposed in the submission. I posted

those estimates as they illustrate Ruben's 4-bit gap in

the mceliece-6960-119 claim from the submission.


Ruben replied he was happy the estimates fit the claimed

security categories. I disagreed. I don't believe the

Category 3 claim for mceliece-4608-094 is supported by

the estimates.


So far so good.

I then made the observation that the parameter selection

criteria would not necessarily lead to a parameter set

meeting Category 3. The rough heuristic I used was

that the public keys are quadratic in the size of the

security parameter. I thought I made it clear this wasn't

a rigorous argument. I certainly didn't use it to make

any claims about the security of specific parameter sets.

Heuristics like this can be useful guides for designers

when choosing parameters but they are no substitute for

proper analysis.


Am I being asked to withdraw the heuristic? I don't think

it should be controversial. When challenged by Ruben I

quoted a formula from the Classic McEliece submission as

that seemed particularly apt. I could equally have quoted

Philippe's rank metric talk from the NIST 2015 workshop.

None of the objections about ignoring o(1) or log terms

are relevant. I have never claimed that asymptotic results

give meaningful security estimates for finite parameters.


The wider objection is that the 187.0 estimate given for

mceliece-4608-094 doesn't include memory costs. This is

true but the Classic McEliece team won't commit to a

metric that does include memory costs. Fortunately, the

NTRUPrime team do and it's now being used by some of

the other submissions. I leave it as an exercise for the

reader to work out if mceliece-4608-094 meets Category 3

in that metric.


Kirk

Jun 26, 2021, 11:12:15 AM6/26/21

to Kirk Fleming, pqc-forum, pqc-comments

I wrote:

> I don't believe the Category 3 claim for mceliece-4608-094

> is supported by the estimates.

> The wider objection is that the 187.0 estimate given for

> mceliece-4608-094 doesn't include memory costs.


> I leave it as an exercise for the reader to work out if

> mceliece-4608-094 meets Category 3 in that metric.

Those should all have been mceliece-4608-096.

Kirk

--

You received this message because you are subscribed to the Google Groups "pqc-forum" group.

To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.

To view this discussion on the web visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/trinity-c36a6550-4677-4b2b-b316-26eab54c11a5-1624667864960%403c-app-mailcom-lxa09.


Jun 26, 2021, 10:08:09 PM6/26/21

to Kirk Fleming, pqc-forum

>> I'd suggest to take this offline so we don't continue

>> to 'spam' the mailing list; this discussion might be

>> off-topic for many readers.

>

>I was planning to agree but I now find myself being asked

>to withdraw claims that I don't believe I made based on

>objections that distort what I said.


Evaluation of the security claims of one of the PQC Round 3 finalists can hardly be off-topic for the PQC-forum readers.


Thus, I would prefer that this discussion stays open – aka, on the list rather than turning “private”.


Jun 29, 2021, 9:22:17 AM6/29/21

to pqc-forum, pqc-co...@nist.gov

Statement by the Classic McEliece team:

The Classic McEliece submission document states on page 46:

"[Bernstein, Lange, and Peters. 2008] reported that its attack uses
2^266.94 bit operations to break the (13,6960,119) parameter set."

However, Kirk Fleming brought to our attention that this quoted attack
cost is for a similar parameter set with the same length, dimension, and
degree - but for adding 121 errors during encryption and not 119 errors
as in our case. We correct the corresponding sentence to:

"[Bernstein, Lange, and Peters. 2008] reported that its attack uses
2^266.94 bit operations to break a related (13,6960,119) parameter
set with 121 errors."

Aug 13, 2021, 10:36:44 AM8/13/21

to pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov, pqc-co...@nist.gov

Dear Dan and others,

I know that you say it is a thankless job, but I am considering being Caroline the cryptanalyst. I know quite a bit about the ISD algorithms, but I am a complete novice to different cost metrics, so I hope you can help by giving some direction here.

1) I did a Google search on Ray Perlner's slides, but did not find anything. Can you be more specific in your reference? Or give a link? Regardless of whether I choose to be Caroline the Cryptanalyst, the slides sound mighty interesting.

2) Are there other resources on different cost metrics, which might be a good place to start?

3) Where do I find analyses of AES in different cost metrics? It only makes sense to do an analysis of McEliece in a specific metric if there exists an analysis of AES in that metric, so I need to know which metrics AES has been analysed in (and what the result was...)

4) You say that the attack is "bottlenecked by random
access to a huge array". When analysing the complexity of the algorithm, is this different from just "calls to memory"? In other words, why can we not "just" count the number of operations and the number of accesses to memory? What makes this type of call to memory special?

I hope to hear from some of you who have more experience with different cost metrics.

Best regards,

Freja Elbro

Aug 13, 2021, 11:16:15 AM

to Freja Elbro, pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov, pqc-co...@nist.gov

Hi Freja,


On the subject of RAM, the cost of a memory read or write depends on the size of the memory being accessed. The signal indicating what to read, and the signal bringing the data back, must travel to and from the location where the data is stored, and this takes time and dissipates energy along the way. If memory were organized as a 3-dimensional structure, then this would take O(cbrt(n)) time and energy. But a large 3d structure of active electronics is difficult to power and cool, so a 2d model and a sqrt(n) cost model is probably more accurate, or perhaps somewhere in between. This holds even on the scale of individual CPUs: it’s why they have multi-level caches. The exact costs are debatable, but the fact that they increase as some power of the memory being accessed is important for even rough estimation of real-world attack costs.


Note that if the attack is run on many parallel machines, and each machine accesses only its own RAM, then this is much cheaper even if the total amount of memory is the same, because they don’t have to communicate as far. That is, each lookup is into a smaller, local RAM, so it’s cheaper.


Taking a naïve attack and multiplying its cost by sqrt(memory) isn’t optimal, of course. The cost can be balanced for a meet-in-the-middle attack, and thus for ISD: you need to trade more computation for fewer memory accesses into smaller memories. See, e.g., the famous van Oorschot-Wiener paper [1]. This makes the attack significantly more expensive than it would be if memory accesses cost O(1).


Brute-force attacks on AES don’t use very much memory, so they cost about the same in this model.
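As a rough illustration of why this matters when extrapolating attack costs, here is a small Python sketch (a toy model with made-up numbers, not an analysis of any real ISD attack) that charges each random access into a memory of 2^m cells a factor of 2^(m/dim):

```python
import math

def access_cost_log2(mem_log2, dim=2):
    # log2 of the cost of one random access into a memory of
    # 2^mem_log2 cells, assuming signals cross a dim-dimensional
    # layout: mem^(1/2) for a 2d layout, mem^(1/3) for 3d.
    return mem_log2 / dim

def attack_cost_log2(ops_log2, accesses_log2, mem_log2, dim=2):
    # Total log2 cost: bit operations, plus random accesses each
    # weighted by the distance-based per-access cost above.
    ops = 2.0 ** ops_log2
    acc = 2.0 ** (accesses_log2 + access_cost_log2(mem_log2, dim))
    return math.log2(ops + acc)

# Made-up profile: 2^140 bit operations and 2^120 random accesses
# into a 2^80-entry table. Constant-cost RAM would say about 2^140;
# the 2d model says about 2^(120+40) = 2^160.
```

In the 3d model the access term is 2^(120+80/3), around 2^146.7, still well above what constant-cost operation counting alone would suggest.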


Cheers,

- Mike


[1] Parallel Collision Search with Cryptanalytic Applications. Paul van Oorschot and Michael Wiener, Journal of Cryptology, 1999. https://link.springer.com/content/pdf/10.1007/PL00003816.pdf


Aug 13, 2021, 1:37:27 PM

to Michael Hamburg, Freja Elbro, pqc-forum, D. J. Bernstein, pqc-forum, pqc-comments

Hi Freja and Mike


Just briefly adding. I think the slides of mine Dan is referring to are the ones linked in my August 17 email here:
https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/GR3plWpSV4U/m/SChfTyNvBAAJ


Best,

Ray


Aug 17, 2021, 6:36:29 AM

to pqc-...@list.nist.gov, pqc-co...@nist.gov

Freja Elbro writes:

> 2) Are there other resources on different cost metrics, which might be
> a good place to start?

A great starting point is Knuth's Art of Computer Programming, which (1)
defines clear metrics for the cost of computation, with (2) some efforts
at realism, and then (3) precisely and (4) accurately analyzes the cost
in these metrics of (5) many important algorithms. For _some_ algorithms
Knuth also (6) compares different metrics, including (7) metrics that
account for communication cost (see, e.g., the analysis of external
sorting) and (8) parallel metrics (see, e.g., the analysis of
sorting-network depth and the systolic array for multiplying integers).

The literature on computational complexity theory has more of #6,
usually backed by #1 and #4, sometimes with #7, sometimes with #8. There
are many pathways into this; the best starting point I've seen is
https://eprints.illc.uva.nl/id/eprint/1008/1/CT-1989-02.text.pdf (well,
I've read the printed version, but I assume the preprint is similar).
However, this side of the literature generally doesn't bother with #5,
and deliberately sacrifices #3 in favor of simplicity, which also
compromises #2 and puts serious limits on the value of #7 and #8.

https://maths-people.anu.edu.au/~brent/pd/rpb055.pdf is a typical
example of a cloud of papers that aim to do better at #2 under the
constraints of #1, #7, and #8. There's a split here between papers
aiming for #3 for optimizing simple algorithms (see, e.g., the AES
attack papers in https://2012.sharcs.org/accepted.html and various other
papers under https://www.hyperelliptic.org/tanja/SHARCS/) and papers
aiming to optimize more complicated algorithms at the expense of #3
(see, e.g., https://cr.yp.to/papers.html#batchnfs).

There are many papers reporting speeds of computations on existing CPUs
and networks. This includes, e.g., papers reporting speed records for
various cryptanalytic computations, scoring well on #5 and usually #3,
sometimes with #7 and/or #8. The idea of using existing machines might
sound great for #2 and #4, but for security analysis someone has to
extrapolate to large-scale attacks. If data-access time isn't obviously
an issue in academic speed records, then people will usually fall into
the trap of assuming that it won't be an issue in much larger attacks,
simply ignoring the research that has gone into analyzing #1+#2+#7+#8.

---Dan

Aug 31, 2021, 11:22:52 AM

to pqc-forum, D. J. Bernstein, pqc-co...@nist.gov, pqc-...@list.nist.gov

Dear NIST and others,

(and thank you so much, Mike, Ray and Dan for comprehensive answers!)

Sabine Pircher and I would like to do the security analysis of McEliece that NIST asked for at PQCrypto 2021: count the concrete number of bit operations needed to perform an ISD attack. However, as Dan and others point out, it is not clear in which metric to do the analysis. Therefore, before we put in the work, we would like some feedback on whether it would be appreciated.

Several authors have already done the count that NIST asks for:

- Christiane Peters' code: https://github.com/christianepeters/isdfq uses the algorithm presented in https://eprint.iacr.org/2009/589 to estimate the security of McEliece. I am not sure how that algorithm compares to BJMM below, but expect it to be slower. Note that the script does not automatically optimize parameters, so to use it you have to search manually.
- "A finite regime analysis of Information Set Decoding algorithms", Algorithms 12, no. 10 (2019), https://www.mdpi.com/1999-4893/12/10/209/pdf, provides security estimates for McEliece based on seven ISD algorithms but does not provide code to check their numbers. They also restrict BJMM to three levels, where two levels would be better in this case; see the answer by Elena below.
- The answer by Elena Kirshanova at https://crypto.stackexchange.com/questions/92074/number-of-bit-operations-required-for-information-set-decoding-attacks-on-code-b estimates the security of McEliece based on MMT and BJMM and provides source code, but she excludes polynomial factors.

What is missing? All the work above assumes that lookup takes constant time.

What we can do: optimize BJMM parameters in the different models

- Random access to memory costs square root of memory size
- Random access to memory costs cube root of memory size

to find the security level of McEliece in those two metrics.
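As a sketch of the kind of optimization we have in mind, here is a toy meet-in-the-middle trade-off in Python (our own illustration; the cost formula is deliberately simplistic and is not BJMM):

```python
import math

def total_cost_log2(n, m, dim=None):
    # Toy meet-in-the-middle over a 2^n search space with a stored
    # list of size 2^m: about 2^m work to build the list plus
    # 2^(n-m) lookups. In a dim-dimensional memory model, every
    # access into the 2^m-cell memory pays an extra factor
    # 2^(m/dim); dim=None means constant-cost RAM.
    penalty = 0.0 if dim is None else m / dim
    return math.log2(2.0 ** (n - m) + 2.0 ** m) + penalty

def best_list_size(n, dim=None, step=0.25):
    # Grid search for the list size (log2) minimizing total cost.
    grid = [i * step for i in range(int(n / step) + 1)]
    return min(grid, key=lambda m: total_cost_log2(n, m, dim))
```

For n = 128 this gives the familiar optimum m = 64 at cost about 2^65 under constant-cost RAM; charging sqrt(memory) pushes the optimum slightly below 64 and raises the total cost to roughly 2^96.8. Doing the analogous (much larger) optimization over the real BJMM parameter space is what we propose.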

What do you think? Would this be helpful to the discussion?

(Thank you, Dan, by the way for your very comprehensive answer! We read some of the material but had to stop at some point, as it was a bit deeper than we were hoping to go. We unfortunately do not have the time to understand detailed models of how computers work, but have to rely on superficial models like the two above to be able to estimate running time.)

Sep 1, 2021, 2:51:01 PM

to Freja Elbro, pqc-forum, D. J. Bernstein, pqc-comments, pqc-forum

Dear Freja,

From our (NIST) perspective, better estimates for the cost of BJMM would be very helpful. You mention models where random access memory costs are proportional to the square root and cube root of memory size, and these would be helpful to have, but we would also like some confirmation that we have accurate estimates of the cost of BJMM when the cost of RAM queries is constant or logarithmic in the size of the memory. As you note, Elena's estimates indicate that BJMM depth 2 is best when determining cost in the RAM model. However, depth 3 seems likely to be better if the cost of memory access is taken into account, unless the number of random accesses to memory is very small relative to the number of bit operations performed.

My guess is that a cube root memory cost is going to be the closest to the real-world cost of an attack, but only because departures from reality in both directions will roughly cancel. It’s also possible that the real cost of memory will be less than suggested by models that don’t distinguish between true random access queries and memory queries that can be made local using strategies similar to van Oorschot-Wiener, and it might be helpful to investigate that possibility when looking at models that assume queries to large memories are significantly more expensive than queries to small memories.
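To make the comparison between models concrete, here is a small Python sketch with two hypothetical attack cost profiles (the numbers are invented purely for illustration and are not real BJMM figures), showing how the cheaper option can flip between access-cost models:

```python
import math

def cost_log2(ops_log2, acc_log2, mem_log2, model):
    # Bit operations plus 2^acc_log2 random accesses into a memory
    # of 2^mem_log2 cells, each access charged per the chosen model.
    # "log" charges ~log2(N) = mem_log2 operations per access, so in
    # log2 terms the per-access factor is log2(mem_log2).
    per_access = {
        "constant": 0.0,
        "log": math.log2(mem_log2),
        "sqrt": mem_log2 / 2.0,
        "cbrt": mem_log2 / 3.0,
    }[model]
    return math.log2(2.0 ** ops_log2 + 2.0 ** (acc_log2 + per_access))

# A: fewer bit operations, heavy use of a large memory.
# B: more bit operations, much smaller memory.
A = dict(ops_log2=150.0, acc_log2=140.0, mem_log2=80.0)
B = dict(ops_log2=158.0, acc_log2=120.0, mem_log2=40.0)

for model in ("constant", "log", "cbrt", "sqrt"):
    print(model, round(cost_log2(**A, model=model), 1),
          round(cost_log2(**B, model=model), 1))
```

With these made-up profiles, A is cheaper under constant or logarithmic access cost, while B is cheaper once accesses are charged a cube or square root of the memory size; that is exactly the kind of ranking flip that a depth-2 versus depth-3 comparison would need to account for.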

In any event, thank you for volunteering to help us in our evaluations.

Ray Perlner

