Greetings,
I have read some of the documents published by NIST and done some further research, hoping to understand what special considerations engineers may need to keep in mind when implementing systems that use the algorithms recently announced for standardization.
One item which confused me was the concept of decryption failures, for example under Kyber. I understand that, for mathematical reasons, in certain very rare cases the shared key does not survive the round trip: the key recovered by Decaps(Encaps(...)) differs from the one Encaps produced. However, in the Kyber paper (
https://ieeexplore.ieee.org/document/8406610/, Algorithm 5), I see that in this case a different key is returned instead, derived from a random and otherwise unused value "z" unique to the secret key.
Is this done purely for constant-time reasons, or does it actually return a usable key? If so, how does the other party obtain the key H(z, H(C)), given that "z" appears to be part of the secret key?
If not, is it simply the implementer's responsibility to detect that the decrypted data is nonsense and to restart the key exchange?
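To make my question concrete, here is a rough toy sketch of how I am currently reading the structure of Algorithm 5. Nothing below is actual Kyber - the underlying "PKE" is a throwaway XOR construction, and all the names (toy_encrypt, H, and so on) are my own - it only illustrates the failure branch I am asking about:

```python
import hashlib
import os

def H(*parts: bytes) -> bytes:
    # Stand-in for the scheme's hash/KDF (the real design uses SHA3/SHAKE).
    h = hashlib.sha3_256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy deterministic "PKE" (NOT Kyber): pk == sk here, and a trailing tag
# lets the re-encryption check below detect any modified ciphertext.
def toy_encrypt(pk: bytes, m: bytes) -> bytes:
    return xor(m, H(b"pad", pk)) + H(b"tag", pk, m)

def encaps(pk: bytes):
    m = os.urandom(32)
    return toy_encrypt(pk, m), H(b"key", m)

def decaps(sk: bytes, z: bytes, c: bytes) -> bytes:
    m = xor(c[:32], H(b"pad", sk))   # decrypt the candidate message
    if toy_encrypt(sk, m) == c:      # re-encryption check
        return H(b"key", m)         # normal case: the shared secret
    # Failure case: return a pseudorandom key derived from the secret
    # value z, rather than signaling an error.
    return H(z, H(c))
```

In this reading, the H(z, H(c)) branch returns a key that the other party has no way to compute, which is what prompts my question about who is meant to detect the mismatch.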
(I realize it may be premature to ask for implementation guidance when the standards don't yet exist, but I have engineers asking me what they may need to plan for (larger ciphertext sizes, for instance), and I'd like to start building a clearer understanding of these algorithms.)
Thank you,
Dan Collins