NIST's criteria for attack-cost metrics

D. J. Bernstein

Oct 30, 2023, 8:09:21 PM
to pqc-...@list.nist.gov
Shortly before round 3 began, NIST said that it could consider dropping
its "classical gate count" requirement in favor of another cost metric,
but that any such metric "must at minimum" meet the four criteria quoted
below regarding measurability, optimization, realism, and avoiding
overestimates. NIST said this seems to be a "fairly tall order".

NIST appears to have dropped the requirement of Kyber-512 being as hard
to break as AES-128 in "classical gate count", and is instead saying
that "in realistic models of computation" the cost of breaking Kyber-512
is, as far as NIST knows, higher than the cost of breaking AES-128. I'll
write "the NIST metrics" below to mean the metrics that NIST is using
for this evaluation of the Kyber-512 security level.
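
To make the shape of that comparison concrete: a scheme meets NIST's
category 1 floor if the cheapest known attack, measured in the chosen
metric, costs at least as much as AES-128 key search, which NIST has put
at roughly 2^143 classical gates. The plain-Python sketch below is
purely illustrative and is not NIST's methodology; the Kyber-512 figure
in it is a hypothetical placeholder, since which number belongs there,
and in which metric, is exactly what is in question.

    # Minimal illustrative sketch, not NIST's methodology. The 2^143
    # figure is the classical gate count NIST has cited for AES-128 key
    # search (its category 1 floor); the Kyber-512 number below is a
    # hypothetical placeholder.

    AES128_KEYSEARCH_LOG2_GATES = 143   # NIST's stated category 1 floor
    KYBER512_ATTACK_LOG2_COST = 140     # hypothetical placeholder value

    def meets_category_1(log2_attack_cost):
        # True if the cheapest known attack costs at least as much as
        # AES-128 key search, measured in the same metric.
        return log2_attack_cost >= AES128_KEYSEARCH_LOG2_GATES

    print(meets_category_1(KYBER512_ATTACK_LOG2_COST))  # False for this placeholder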

I'm trying to understand how the NIST metrics stack up against NIST's
stated minimum criteria regarding measurability, optimization, realism,
and avoiding overestimates. Specific yes/no questions for NIST appear
below; for any "yes" answers, I'd appreciate a precise citation.

> 1) The value of the proposed metric can be accurately measured (or at
> least lower bounded) for all known attacks (accurately here means at
> least as accurately as for gate count.)

Is there a definition of the NIST metrics?

Is there documentation giving evidence of measurability of "all known
attacks" in these metrics?

Is there documentation giving evidence that the resulting measurements
are at least as accurate as "gate" counts?

> 2) We can be reasonably confident that all known attacks have been
> optimized with respect to the proposed metric. (at least as confident
> as we currently are for gate count.)

Is there documentation of optimization of "all known attacks" with
respect to the NIST metrics?

If not, is there at least such documentation for the specific attacks
that NIST is using to evaluate the Kyber-512 security level?

Is there documentation giving evidence that this optimization is at
least as thorough as optimization for "gate" counts?

> 3) The proposed metric will more accurately reflect the real-world
> feasibility of implementing attacks with future technology than gate
> count -- in particular, in cases where gate count underestimates the
> real-world difficulty of an attack relative to the attacks on AES or
> SHA3 that define the security strength categories.

Is there documentation giving evidence that the NIST metrics are more
accurate than "gate" count in reflecting real-world feasibility?

> 4) The proposed metric will not replace these underestimates with
> overestimates.

Is there documentation giving evidence that the NIST metrics aren't
replacing underestimates with overestimates?

If I missed an announcement saying that for some reason NIST has dropped
these criteria, then I'd appreciate a link to that announcement---but I
think measurability, optimization, realism, and avoiding overestimates
are important issues to consider in any case, so I'm still asking the
questions above in the interests of transparency regarding NIST's
security evaluations. Thanks in advance to NIST for its answers.

---D. J. Bernstein

P.S. For completeness, the quotes are from NIST's email dated 23 Jun
2020 21:25:49 +0000, and what NIST wrote about "minimum" etc. was that
"we currently believe classical gate count to be a metric that is
'potentially relevant to practical security,' but are open to the
possibility that someone might propose an alternate metric that might
supersede gate count and thereby render it irrelevant. In order for this
to happen, however, whoever is proposing such a metric must at minimum
convince NIST that the metric meets the following criteria: ..."

dustin...@nist.gov

Nov 2, 2023, 12:51:10 PM
to pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov
The June 23, 2020, email from which Dan Bernstein quoted was part of a thread (https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/o2roJXAlsUk/m/gea0RX5NBgAJ) discussing the process for making decisions about the security levels of schemes. We encourage readers to refer back to the original thread for complete context. When the original emails are read in context, we believe that the questions below are not being asked in good faith, and so we will not be responding to them.

D. J. Bernstein

Nov 2, 2023, 2:24:59 PM
to pqc-...@list.nist.gov
NIST writes:
> When the original emails are read in context, we believe
> that the questions below are not being asked in good faith, and so we
> will not be responding to them.

Wow.

These basic issues of measurability, optimization, realism, and avoiding
overestimates were so important in 2020 that NIST announced them as
criteria that new metrics "must at minimum" satisfy, but by 2023 they're
so taboo that polite yes/no questions about their status are met with NIST
issuing a personal attack and refusing to answer the questions?

And this behavior by NIST is supposed to be justified because of, um,
some unspecified part of the thread back in 2020? The relevant part will
be clear once you see it, but NIST is unable to quote it? Seriously?

---D. J. Bernstein