Hi,
The Cryptographic Module Validation Program (CMVP) is looking for input on what it should define as Cryptographic Algorithm Self Tests (CAST) for modules implementing FIPS 203, 204, and 205.
In short, they would like to update their Implementation Guidance (IG) 10.3.A that defines CAST requirements for modules targeting FIPS 140-3 validation. They’d like to get a draft IG ready for public review by the end of 2023.
For anyone not familiar with FIPS 140-3, CAST are typically known-answer tests that are run ahead of the first use of an algorithm implementation. They are typically triggered after power-on to provide some level of assurance that the implementation of the target algorithm is operating as expected. For some modules, they are also run periodically to confirm the continued correct operation of the module. Historically, the use of back-to-back operations (e.g. sign followed by verify) was permitted for certain algorithms, such as ECDSA, under FIPS 140-2; that option was removed for FIPS 140-3.
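To make the idea concrete, here is a minimal sketch of a power-on known-answer test. All function names and the error-handling style are illustrative assumptions, not from the IG; the known answer itself is the standard SHA-256 digest of "abc".

```python
import hashlib

# Fixed input and precomputed digest embedded in the module.
KAT_MESSAGE = b"abc"
KAT_EXPECTED = bytes.fromhex(
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
)

def sha256_cast() -> bool:
    """Return True if the SHA-256 implementation matches the known answer."""
    return hashlib.sha256(KAT_MESSAGE).digest() == KAT_EXPECTED

def power_on_self_tests() -> None:
    # Run all CASTs before first use of any algorithm; on any mismatch the
    # module would enter an error state (modeled here as an exception).
    if not sha256_cast():
        raise RuntimeError("CAST failure: SHA-256 known-answer mismatch")
```

The same pattern (fixed input, stored expected output, comparison before first use) generalizes to the PQC algorithms, which is what the IG update would need to pin down.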
In general, CAST traditionally do not exhaustively test all algorithm options available with a given implementation; they typically target a sample of options for a given algorithm (e.g. a single key size, a single hash option, a single method of padding messages). CMVP is looking for feedback on what level of testing is appropriate for the new algorithms and what, if any, constraints need to be imposed during CAST (e.g. whether modules should support the use of fixed seeds during tests in order to ensure deterministic CAST results).
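The fixed-seed question comes down to the interface shape: if key generation accepts a caller-supplied seed, the CAST can compare against a precomputed expected output. The sketch below is purely illustrative — `ml_kem_keygen` is a toy stand-in (hash-derived bytes, not real ML-KEM), and the point is only that a seed parameter makes the test deterministic.

```python
import hashlib

def ml_kem_keygen(seed: bytes) -> tuple[bytes, bytes]:
    """Toy stand-in for key generation: output derived entirely from seed."""
    ek = hashlib.shake_256(b"ek" + seed).digest(32)
    dk = hashlib.shake_256(b"dk" + seed).digest(32)
    return ek, dk

FIXED_SEED = bytes(32)  # all-zero seed, fixed for the self-test

# In a real module these would be precomputed once and embedded as
# constants; computed inline here only to keep the sketch self-contained.
EXPECTED_EK, EXPECTED_DK = ml_kem_keygen(FIXED_SEED)

def keygen_cast() -> bool:
    # Deterministic: the same fixed seed must reproduce the stored outputs.
    ek, dk = ml_kem_keygen(FIXED_SEED)
    return ek == EXPECTED_EK and dk == EXPECTED_DK
```

Without such a seed parameter (i.e. if the implementation draws randomness internally), a known-answer keygen test is not possible and the CAST would have to target other operations.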
The relevant section of the FIPS 140-3 IG for convenience can be found on the NIST website using this link: Implementation Guidance for FIPS 140-3 (nist.gov) - IG 10.3.A.
The original post looking for support from the CMUF is: Discussions - Projects (onlyoffice.co) (to access this you’ll need to register on the CMUF front page).
If anyone on this list is interested in helping NIST CMVP with this task, please either contribute directly to the CMUF thread (link above) or pass your interest to me and I can feed it back to Alex through the CMUF.
Kind Regards,
Graham.
(Shared on behalf of the CMUF and with permission of CMVP - at this point, I don’t think CMVP has received any significant offers of support.)
------------------------------------------------------------------------------------------------------------------
Original Text from CMVP post on this:
------------------------------------------------------------------------------------------------------------------
Alex Calis 10:24 AM 10/19/2023#
To help the CMVP prepare for the upcoming PQC algorithms, the CMVP is requesting help from the CMUF on developing the self-test requirements for the algorithms specified in the Draft FIPS 203/204/205 documents.
At minimum, this would cover:
The CMVP would like to have a revised Draft IG 10.3.A that incorporates these self-test requirements out for review by the end of the year. Please find the Word version of the current IG 10.3.A here (track changes should be ON): https://cmuserforum.onlyoffice.co/Products/Files/DocEditor.aspx?fileid=8659524.
A CMUF WG may be formed as part of this work, but it is not necessary if enough traction is made through feedback and proposed IG updates posted to this thread.
Thanks,
CMVP
Hi Graham,
One aspect that has often been missing in the past is negative test vectors. You should, e.g., not be able to pass ML-KEM.Encaps and ML-KEM.Decaps validation if your implementation accepts invalid input. In general, I think all places where an input can be invalid should be tested unless it is proven that invalid input is harmless. Negative test vectors are often more important for security than positive test vectors. There are unfortunately a large number of ECC implementations not performing ECC point validation. My current recommendation would therefore be to use Curve25519 as much as possible.
This is what Ericsson wrote in our official comments:
"We strongly think NIST should produce negative test vectors for all algorithms. Negative test vectors are very important for catching bugs that might have security implications. We think all future algorithm and protocols standards should be accompanied with negative test vectors. It is often claimed that security agencies participate in standardization and production of cryptographic modules with the explicit goal of sabotaging security to enhance their surveillance capabilities. Taking a strong stance on finding security threatening implementation bugs would increase the trust in NIST as a global SDO for cryptography. Two functions that require negative test vectors are ML-KEM.Encaps and ML-KEM.Decaps. FIPS validation shall not be achievable without input validation."
https://github.com/emanjon/NIST-comments/blob/main/2023%20-%20FIPS%20203%2C%20204%2C%20and%20205.pdf
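A negative test vector in this setting would be an input the implementation must reject before running the operation. The sketch below shows the shape of such a pre-operation check for ML-KEM.Encaps, based on the encapsulation-key checks in the draft FIPS 203; the length constant is for ML-KEM-768 (k = 3, so 384·k + 32 = 1184 bytes), and the full re-encoding ("modulus") check is elided.

```python
# Encapsulation-key byte length for ML-KEM-768 per draft FIPS 203.
ML_KEM_768_EK_LEN = 384 * 3 + 32  # 1184 bytes

def validate_encaps_key(ek: bytes) -> bool:
    """Reject malformed encapsulation keys before running Encaps."""
    if len(ek) != ML_KEM_768_EK_LEN:
        return False
    # A complete check would also decode the key's coefficients and
    # verify that re-encoding reproduces ek exactly (the modulus check,
    # omitted in this sketch).
    return True
```

Negative test vectors would then assert that inputs like a truncated or empty key make `validate_encaps_key` return False, so that validation cannot be passed by an implementation that skips the check.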
Cheers,
John Preuß Mattsson
--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
pqc-forum+...@list.nist.gov.
To view this discussion on the web visit
https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/PR0P264MB37555167E4A947768460E21F99BCA%40PR0P264MB3755.FRAP264.PROD.OUTLOOK.COM.
Thank you, John. Just to be clear, the tests in question are those performed by the module to check its continued correct operation, i.e. looking for failures post-deployment.
The implementations would separately be subject to the full suite of tests made available by the NIST Cryptographic Algorithm Validation Program (CAVP), but those tests are run before deployment.