Comments on FIPS 203


Filippo Valsorda

Nov 22, 2023, 7:24:22 PM
Hi all,

I am forwarding the comments I submitted to NIST for the FIPS 203 draft.

I am a maintainer of the cryptography libraries distributed with the Go language, and these are my personal comments, but they are informed by and oriented towards implementing ML-KEM for the Go ecosystem. Here are the key points.
  • The specification is welcome and well-written.
  • I was able to implement the whole scheme based on it without referring to existing implementations.
  • I support the restriction on the output key, and the removal of the randomness hashing step.
  • I suggest rolling back the change to the Fujisaki-Okamoto transform.
  • I support the specification of the whole range of parameters.
  • I suggest that NIST approve the use of a 128-bit RBG with ML-KEM-768 if targeting Security Category 1.
  • Compression and decompression in Section 4.2.1 could use some extra implementation guidance.
  • “r” is reused for the 32-byte K-PKE.Encrypt input and for the vector of polynomials sampled from it. Renaming one of the two would make the global lexical scope fully consistent.
  • The comment on line 6 of Algorithm 14 (K-PKE.Decrypt) is incorrect.
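On the compression point: the spec defines Compress_d(x) = ⌈(2^d/q)·x⌋ mod 2^d and Decompress_d(y) = ⌈(q/2^d)·y⌋, with ties rounding up, and a common pitfall is implementing the rounding with floating point or with a plain floor. A minimal integer-only sketch (not the spec's pseudocode, and not constant-time — a real implementation would replace the division by q with a Barrett-style multiplication) could look like this:

```go
package main

import "fmt"

const q = 3329 // the ML-KEM field modulus

// compress maps a field element x in [0, q) to d bits:
// round(2^d/q * x) mod 2^d, ties rounded up. Adding q/2 before the
// integer division implements round-half-up exactly, since q is odd.
func compress(x uint16, d uint8) uint16 {
	dividend := uint32(x)<<d + q/2
	return uint16((dividend / q) & ((1 << d) - 1))
}

// decompress maps d bits back to a field element: round(q/2^d * y).
// Here the half step 2^(d-1) is exactly representable, so adding it
// before the shift implements round-half-up exactly.
func decompress(y uint16, d uint8) uint16 {
	dividend := uint32(y)*q + (1 << (d - 1))
	return uint16(dividend >> d)
}

func main() {
	// Round trip is lossy by design: the error is bounded, not zero.
	fmt.Println(compress(1664, 10), decompress(compress(1664, 10), 10)) // → 512 1665
}
```

Note the mask in compress: field elements near q wrap to 0 modulo 2^d, which is intended behavior, not an overflow bug.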


----- Original message -----
From: Filippo Valsorda <>
Subject: Comments on FIPS 203
Date: Thursday, 23 November 2023 01:14

Dear NIST,

I am attaching my comments on FIPS 203.


Comments on FIPS 203.pdf

D. J. Bernstein

Nov 23, 2023, 5:04:14 AM
Filippo Valsorda writes:
> in practice it’s hard to imagine a real-world system that can
> survive—as a whole—compromise of its RBG just because its KEM hashes
> the RBG output.

The ecosystem is big. Presumably Dual EC is still deployed.

If the argument here is that the KEM isn't the only way RNG outputs are
leaked: sure, but it's very easy to imagine real-world systems where the
other leaks have been plugged or didn't exist in the first place.

(I'm reminded of how TLS for a long time pointed to unencrypted DNS as
an excuse to not encrypt host names, and vice versa.)

The 203 draft instead makes the narrower claim that hashing is
unnecessary with (current, not Dual EC) "NIST-approved randomness
generation". But NIST-approved randomness generation is a mess of
different options without clearly defined quantitative security claims
and without a full analysis. An extra hashing layer reduces risks.

I commented in an earlier message about a paper that did some analysis
of NIST's RNG _modes_ (not the full RNGs) and found
various flaws. Here's an example: Section 7.1 of the paper breaks NIST's
claim that HMAC-DRBG provides forward secrecy. This is a fairly narrow
attack, mattering only for applications that want forward secrecy, but
it's still an illustration of the importance of security review. Hashing
the RNG output stops the stated attack, and more broadly means that any
break of forward secrecy has to recover full RNG outputs (if the hashing
is strong), not just some fragmentary information about the outputs.
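The defense-in-depth step under discussion is small. A sketch of what the removed hashing layer amounts to (round-3 Kyber specified SHA3-256 for this hash; this illustration substitutes SHA-256 from the Go standard library purely for simplicity):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// deriveSeed hashes the RNG output before it is used as the
// encapsulation seed, so that anything leaked through the KEM is a
// function of H(m) rather than of the raw RNG output m. Recovering
// RNG state then requires inverting the hash, not just observing
// fragmentary information about its outputs.
func deriveSeed() ([32]byte, error) {
	var m [32]byte
	if _, err := rand.Read(m[:]); err != nil {
		return m, err
	}
	return sha256.Sum256(m[:]), nil
}

func main() {
	seed, err := deriveSeed()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", seed)
}
```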

Meanwhile, what I still haven't seen from NIST is a statement of how
NIST claims the hashing is supposed to be a _problem_.

Keeping cryptosystem specifications stable for security review is
important. NIST has noted that changes "after the third round ... may
not receive as much public scrutiny and analysis" and has said that it
wants to "minimize changes introduced". What exactly is the issue with
the hashing that's supposed to outweigh this?

---D. J. Bernstein