Recommendations for Key-Encapsulation Mechanisms | Draft SP 800-227 is Available for Comment

Moody, Dustin (Fed)

Jan 7, 2025, 10:15:23 AM
to pqc-forum
The initial public draft of NIST Special Publication (SP) 800-227, Recommendations for Key-Encapsulation Mechanisms, is now available for public comment.
 
NIST recently published Federal Information Processing Standard (FIPS) 203, Module-Lattice-Based Key-Encapsulation Mechanism Standard, to update its cryptographic standards with an algorithm designed to provide protection from quantum attacks. In addition, NIST will select one or two additional quantum-resistant key-encapsulation mechanisms (KEMs) for standardization. To provide guidance on using KEMs, NIST is introducing SP 800-227, Recommendations for Key-Encapsulation Mechanisms. This draft document describes the basic definitions, properties, and applications of KEMs. It also provides recommendations for implementing and using KEMs in a secure manner.
 
The public comment period is open through March 7, 2025. See the publication details for a copy of the draft and instructions for submitting comments.
 
NIST will also hold a virtual Workshop on Guidance for KEMs on February 25-26, 2025, to gather additional feedback on SP 800-227.


Dustin Moody
NIST PQC

dustin...@nist.gov

Jan 7, 2025, 10:16:34 AM
to pqc-forum, dustin...@nist.gov

Call for Submissions - NIST Workshop on Guidance for KEMs 

February 25-26, 2025 

(Virtual Event Only) 

Submission Deadline: January 28, 2025 

 

 

NIST recently published FIPS 203, Module-Lattice-Based Key-Encapsulation Mechanism Standard, to update its cryptographic standards with an algorithm designed to provide protection from quantum attacks. In addition, NIST will select one or two additional quantum-resistant key-encapsulation mechanisms (KEMs) for standardization. To provide guidance on using KEMs, NIST has released draft NIST SP 800-227, Recommendations for Key-Encapsulation Mechanisms. Public comments on SP 800-227 are due by March 7, 2025. 

 

To further engage the cryptographic community and gather feedback, NIST will hold a virtual workshop on February 25-26, 2025, focusing on draft SP 800-227. The workshop aims to facilitate discussions on NIST's guidance for KEMs.  

 

NIST invites submissions in the form of discussion papers, surveys, presentations, research, case studies, panel proposals, and participation from all interested parties, including researchers, system architects, implementors, vendors, and users. NIST will post any accepted submissions on the conference website after the conference; however, no formal proceedings will be published.  

 

Topics for submissions include, but are not limited to: 

  • Testing and validations of implementations of KEMs  

  • Impacts to existing applications and protocols (e.g., changes needed to accommodate KEMs) 

  • Steps or strategies for organizations to prepare for migrating to KEMs 

  • Input checking for values input to KEM algorithms 

  • Using KEMs for authenticated key establishment 

  • Key combiners for KEMs, or hybrid/composite KEMs 

  • Performance, scalability, or interoperability considerations for KEM deployment 

  • Implementation experiences and lessons learned from KEM adoption 

  • Potential use cases and industry applications of KEMs 

  • Limits for use of ephemeral keys  

 

Additional relevant topics are welcome. 

 

Submissions should be provided electronically, in PDF or PowerPoint format. As there is a short amount of time for this call, NIST is looking for extended abstracts of 1-2 pages describing what would be presented. Alternatively, slides of a proposed presentation could be submitted. Proposals for panels should include the topics for discussion, as well as possible panelists and an indication of which panelists have confirmed their participation. 

 

Please submit the following information to pqc-ke...@nist.gov: 

  • Name, affiliation, email, phone number (optional), postal address (optional) for the primary submitter  

  • First name, last name, and affiliation of each co-submitter  

  • Extended abstract, presentation, or panel proposal in electronic format as an attachment  

 

All submissions will be acknowledged. General information about the conference, including registration information, will be available at the conference website: http://www.nist.gov/pqcrypto. 

Mike Ounsworth

Jan 7, 2025, 12:16:13 PM
to Moody, Dustin (Fed), pqc-forum

Thanks Dustin, it's great to get an IPD out so quickly.

 

I may have skimmed too quickly, but does SP 800-227 give any clarification on whether the seeds passed into ML-KEM.KeyGen_internal may come from a KDF, or are you leaving in place FIPS 203’s requirement that the seed come directly from an approved DRBG?

 

In particular, this impacts the IETF’s X-Wing proposal [1] which has the following KeyGen:

 

def expandDecapsulationKey(sk):
  expanded = SHAKE256(sk, 96)
  (pk_M, sk_M) = ML-KEM-768.KeyGen_internal(expanded[0:32], expanded[32:64])
  sk_X = expanded[64:96]
  pk_X = X25519(sk_X, X25519_BASE)
  return (sk_M, sk_X, pk_M, pk_X)
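For concreteness, the seed-expansion step can be sketched in Python with hashlib's SHAKE-256; the ML-KEM-768 and X25519 calls are left as comments since they need external libraries (a sketch of the draft's pseudocode, not a conformant implementation):

```python
import hashlib

def expand_decapsulation_key(sk: bytes):
    """Expand a 32-byte X-Wing seed into the two ML-KEM-768 KeyGen_internal
    seeds and the X25519 private key, mirroring expandDecapsulationKey."""
    assert len(sk) == 32
    expanded = hashlib.shake_256(sk).digest(96)
    d, z = expanded[0:32], expanded[32:64]  # seeds for ML-KEM-768.KeyGen_internal
    sk_x = expanded[64:96]                  # X25519 private key
    # (pk_m, sk_m) = ML_KEM_768.KeyGen_internal(d, z)  # external library call
    # pk_x = X25519(sk_x, X25519_BASE)                 # external library call
    return d, z, sk_x

d, z, sk_x = expand_decapsulation_key(b"\x00" * 32)
```

This is exactly the pattern in question: the 64 bytes handed to KeyGen_internal come out of SHAKE-256 rather than directly from an approved DRBG.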

 

 

To my understanding, expanding a seed with SHAKE256 prior to invoking ML-KEM.KeyGen_internal is currently explicitly FIPS-disallowed and will remain so under SP 800-227. I would appreciate it if NIST could confirm that my understanding is correct.

 

 

[1]: https://datatracker.ietf.org/doc/draft-connolly-cfrg-xwing-kem

---

Mike Ounsworth

--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
To view this discussion visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/PH0PR09MB8667B8EBCD77C12889F5CA65E5112%40PH0PR09MB8667.namprd09.prod.outlook.com.

Loganaden Velvindron

Jan 7, 2025, 12:47:34 PM
to Moody, Dustin (Fed), pqc-forum
A comment so far (page 17):

Side-channel protection. Cryptographic modules for KEMs should be
designed with appropriate countermeasures against side-channel attacks.
This includes protecting against timing attacks with constant-time
implementations and protecting memory from leakage.

Protecting memory from leakage seems a bit terse. How about zeroing
sensitive key material once it's no longer needed?

Cas Cremers

Jan 8, 2025, 11:00:09 AM
to pqc-forum, Moody, Dustin (Fed), Dax, Alexander, Niklas Medinger
Dear all,

Unless we missed some subtle detail somewhere, we think that the unilateral key establishment/transport processes (e.g., Figures 4 and 5), when instantiated with the RSA-KEM option, do not guarantee that the derived session key is bound to the intended public key. This can, e.g., enable a dishonest server to cause a client to exchange a key with another, unintended server (see [1], Section 3.1). In mutually authenticated protocols, such behavior typically manifests as an Unknown Key-Share (UKS) attack.

We explain the behavior at the end of this e-mail. The underlying cause is that RSA-KEM does not provide strong binding properties (see [2]), which allows re-encapsulation, and the protocols in the draft do not mitigate this at the protocol level. If the key establishment/transport protocols are instead instantiated with DH-KEM or ML-KEM, they provide better key agreement properties.

This can be fixed in two main ways:

Option 1: Fix the key transport/establishment protocols in the standard. One standard way of doing this is to include the involved public encapsulation key(s) in the final key derivation.

Option 2: Require stronger binding properties [2] from the KEMs for the example protocols. This would exclude the use of RSA-KEM.
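As a minimal sketch of what Option 1 amounts to (the SHA-256-based KDF here is illustrative only, not something the draft prescribes): including the encapsulation key in the derivation makes the key computed by Alice (under her ek_A) differ from the key computed by Bob (who encapsulated under Eve's ek_E), so the re-encapsulated secret Z no longer yields a silently shared key.

```python
import hashlib

def derive_key(z: bytes, ek: bytes, nonce: bytes) -> bytes:
    """Option 1: bind the session key to the encapsulation key ek.

    With RSA-KEM, Eve can re-encapsulate the same shared secret Z under a
    different public key; including ek in the KDF makes the two derived
    keys differ, defeating the re-encapsulation trick."""
    return hashlib.sha256(z + ek + nonce).digest()

z, nonce = b"shared-secret-Z", b"nonce-Na"
k_alice = derive_key(z, b"ek_Alice", nonce)  # Alice derives under her own key
k_bob = derive_key(z, b"ek_Eve", nonce)      # Bob encapsulated to Eve's key
```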

Best wishes,

Cas & Alex & Niklas

[1] Cyprien Delpech de Saint Guilhem and Marc Fischlin and Bogdan Warinschi, “Authentication in Key-Exchange: Definitions, Relations and Composition”
https://eprint.iacr.org/2019/1203

[2] Cremers, C.; Dax, A.; Medinger N. (2023), "Keeping Up with the KEMs: Stronger Security Notions for KEMs and automated analysis of KEM-based protocols"
https://eprint.iacr.org/2023/1933 (Also at ACM CCS 2023)

----------------------------------------------
Unintended behavior description.
Honest parties: Alice, Bob
Dishonest party: Eve
Example of using the protocol from Figure 4 using the definitions of RSASVE from SP 800-56Br2 (Section 7.2.1 and Section 8.2.2) and RSASVE-KEM from SP 800-227 (Section 6.1.2). We assume static KEM keys.

Eve starts a new session with Bob.
- Bob uses RSASVE-KEM.Encaps with ek_E and obtains (K_B, c_B)
- Bob sends c_B to Eve.
- Eve computes RSASVE.Recover(dk_E, c_B) (see SP 800-56Br2 Section 7.2.1.3) to obtain K_B = Z, the shared secret previously chosen by Bob.

For the party Alice that Eve wants to force the key to be shared with:
- Eve uses RSASVE.Generate(ek_A) but, instead of using the RBG in step 2a (see SP 800-56Br2 Section 7.2.1.2), they re-use the shared secret Z, resulting in (Z, c_A).
- Eve sends c_A to Alice
- Alice uses RSASVE-KEM.Decaps(dk_A, c_A) to obtain Z.
- Alice sends nonce N_a to the adversary and incorporates it in the final shared key K = KDF(Z || N_a).

- Eve learns nonce N_a and forwards it to Bob who also incorporates it in their final shared key K = KDF(Z || N_a).

Thus, Alice and Bob share a key, but Bob thinks it is shared with Eve.

The exact same procedure works for Figure 5, as the key confirmation step can be completed similarly because the attacker knows K and can construct the confirmation data.
----------------------------------------------

D. J. Bernstein

Jan 9, 2025, 8:02:17 AM
to pqc-...@list.nist.gov
Cas Cremers writes:
> Option 2: Require stronger binding properties [2] from the KEMs for
> the example protocols. This would exclude the use of RSA-KEM.

This wouldn't exclude the use of RSA-KEM. Consider, e.g., a caller that
uses RSA-KEM by first plugging it into HashPK and then plugging the
resulting composition HashPK(RSA-KEM) into the protocol. Here HashPK is
a simple modular wrapper that replaces the session key k with H(k,p)
where p is the public key and H is a hash function. Reusing H(k,p) for
another public key requires finding collisions in H. Similar comments
apply to binding to other aspects of the context.
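A toy sketch of such a wrapper, with H instantiated as SHA-256 and an intentionally insecure stand-in inner KEM (both are illustrative assumptions, not part of any standard):

```python
import hashlib
import os

class ToyKEM:
    """Insecure stand-in KEM (the 'ciphertext' is the key in the clear),
    used only to exercise the wrapper below."""
    def encaps(self, pk):
        k = os.urandom(32)
        return k, k
    def decaps(self, sk, c):
        return c

class HashPK:
    """Modular wrapper replacing the session key k with H(k, pk).

    Reusing the wrapped key under another public key would require a
    collision in H, giving the binding property at the wrapper level
    without touching the protocol or the inner KEM."""
    def __init__(self, inner):
        self.inner = inner
    def encaps(self, pk):
        k, c = self.inner.encaps(pk)
        return hashlib.sha256(k + pk).digest(), c
    def decaps(self, pk, sk, c):
        return hashlib.sha256(self.inner.decaps(sk, c) + pk).digest()

kem = HashPK(ToyKEM())
k_enc, ct = kem.encaps(b"pk-A")
k_dec = kem.decaps(b"pk-A", b"sk-A", ct)
```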

---D. J. Bernstein
signature.asc

Cas Cremers

Jan 9, 2025, 8:09:28 AM
to pqc-...@list.nist.gov
Hi,

This is an excellent example of Option 1.

Best,

Cas


D. J. Bernstein

Jan 9, 2025, 8:50:30 AM
to pqc-...@list.nist.gov
Cas Cremers writes:
> This is an excellent example of Option 1.

No, it is not an example of Option 1.

Option 1 was "Fix the key transport/establishment protocols in the
standard". The word "fix" tells the reader that the standard protocols
are problematic and are then changed.

What I described doesn't touch those protocols at all, and doesn't say
that there's a problem with the protocols. It's instead inserting a
modular adapter between those protocols and the KEM to achieve the
desired system property (assuming that this property is desired). The
adapter can be specified in another section or in another document.

The stated list of options tells people that achieving this property
using RSA-KEM requires modifying the protocols: there's either "Option
1" to "fix" the "protocols in the standard", or "Option 2" that would
supposedly "exclude the use of RSA-KEM". But that's not true.

---D. J. Bernstein
signature.asc

Moody, Dustin (Fed)

Jan 14, 2025, 11:17:42 AM
to Mike Ounsworth, pqc-forum
Mike,  

This topic came up on the pqc-forum a little bit earlier, see:


You are right that SP 800-227 doesn't mention this topic, and that FIPS 203 doesn't allow for seeds to come from a KDF (instead of an approved DRBG).  We are working on an update of SP 800-133 that would allow for this.  

Dustin





Deirdre Connolly

Jan 14, 2025, 2:06:07 PM
to Moody, Dustin (Fed), Mike Ounsworth, pqc-forum
> We are working on an update of SP 800-133 that would allow for [seeds to come from a KDF (instead of an approved DRBG)]

This is excellent! 🎉

Looking at FIPS 203, there is no mention of 800-133 (maybe this is normal), and it does say:

These random bytes shall be generated using an approved RBG, as prescribed in SP 800-90A, SP 800-90B, and SP 800-90C.

And of course that “shall” is the strongest requirement for compliance with the standard. Are there any plans to revise FIPS 203 to point at SP 800-133 as well? Figured I'd ask.

COSTA Graham

Jan 15, 2025, 5:11:37 AM
to Deirdre Connolly, Moody, Dustin (Fed), Mike Ounsworth, pqc-forum


 

Perhaps this is something else for NIST to add to the errata spreadsheet now published with each standard? 

 

Linking back to an earlier thread, I’d still strongly suggest doing something similar to formalize the statements made on this mailing list regarding the permitted external-mu calculation and FIPS 204.

John Mattsson

Jan 28, 2025, 8:25:10 AM
to dustin...@nist.gov, pqc-forum, dustin...@nist.gov

Thanks for arranging this!

I think it is great to have a workshop to discuss KEMs and everything related to them.

We just submitted the following paper to the workshop

https://emanjon.github.io/Publications/ML-KEM%20is%20Great!%20What's%20Missing.pdf


We focused mostly on high-level topics in the paper. We will provide more detailed comments and suggestions by the March 7 comment deadline.

Cheers,
John


dustin...@nist.gov

Feb 19, 2025, 8:23:49 AM
to pqc-forum, dustin...@nist.gov
The virtual NIST Workshop on Guidance for KEMs is next week, on February 25-26, focusing on SP 800-227. The workshop aims to facilitate discussions on NIST's guidance for KEMs. The agenda for the workshop has been posted:


You can register at that same page.  As a reminder, the deadline for comments on draft SP 800-227 is March 7, 2025.

Thanks,

Dustin Moody

Mike Ounsworth

Feb 19, 2025, 11:40:54 AM
to Cas Cremers, pqc-forum, Moody, Dustin (Fed), Dax, Alexander, Niklas Medinger
Hi Cas & Alex & Niklas — and Dustin, Goran and team,

In SP 800-227 ipd, just below Fig. 4, there is an "Additional considerations" note: "The steps 1-5 in the key establishment process above might need to be modified depending on the security and functionality needs of the application."

There is already one paragraph about static versus ephemeral keys. Would it satisfy the comment if a paragraph were added about binding properties, leaving it at that?

---

Mike Ounsworth

 




赵运磊

Feb 21, 2025, 8:12:34 AM
to dustin...@nist.gov, pqc-forum

I have a concern about "6.2.1. Hybrid Public-Key Encryption (HPKE)", pages 34-35 in Draft SP 800-227. 


For HPKE to be CCA-secure, the common method is to compose a CCA-secure KEM with a CCA-secure symmetric-key encryption (SKE) scheme. Now consider a typical construction of CCA-secure symmetric-key encryption that composes a CPA-secure SKE and a MAC scheme. Assume the key used by the CPA-secure SKE is K1 and the key used by the MAC is K2.


With ML-KEM, the 256-bit random seed $s$ is used to derive the 256-bit encapsulated key $K$ and a random string used in the FO transformation. For the above CCA-secure HPKE, the actual entropy of K1 or K2 may be much less than 256 bits. In fact, the entropy is reduced twice: first from $s$ to $K$, and then from $K$ to $K1$ or $K2$.


Would this pose a quantum threat to the underlying CPA-secure SKE or MAC in practice? For example, the MAC key K2 can be viewed as something like H(counter, H(counter, seed)); its collision resistance is particularly reduced, which may in turn undermine the security of the MAC in practice.


In the RO model, entropy reduction may not be a problem, but reduction in collision resistance might be a problem in practice. Consider the following attack: (1) first make the assumption that each derivation step halves the effective entropy in practice; that is, with an original random seed of 256 bits of entropy, the entropy of K is about 128 bits, and the entropy of K1 or K2 is about 64 bits. (2) Under this assumption, could the underlying SKE and MAC still provide classical 256-bit or even 128-bit security? An adversary might take much less than 2^128 time to find a key collision with K1 or K2. In comparison, with traditional public-key cryptography like RSA, one could encrypt a random seed of 512 bits or even longer, and could use random seeds of varying lengths according to different security levels.



With this in mind, NIST may consider seeking future KEMs that allow CPA-encrypting seeds of varying length.


Yours sincerely

Yunlei





-----Original Message-----
From: "'dustin...@nist.gov' via pqc-forum" <pqc-...@list.nist.gov>
Sent: 2025-02-19 21:23:49 (Wednesday)
To: pqc-forum <pqc-...@list.nist.gov>
Cc: "dustin...@nist.gov" <dustin...@nist.gov>
Subject: [pqc-forum] Re: Recommendations for Key-Encapsulation Mechanisms | Draft SP 800-227 is Available for Comment

dustin...@nist.gov

Mar 5, 2025, 9:36:48 AM
to pqc-forum, dustin...@nist.gov
All,

Just a reminder that the deadline for comments on the initial public draft of NIST Special Publication (SP) 800-227, Recommendations for Key-Encapsulation Mechanisms, is this Friday, March 7, 2025.  

NIST recently published Federal Information Processing Standard (FIPS) 203, Module-Lattice-Based Key-Encapsulation Mechanism Standard, to update its cryptographic standards with an algorithm designed to provide protection from quantum attacks. In addition, NIST will select one or two additional quantum-resistant key-encapsulation mechanisms (KEMs) for standardization. To provide guidance on using KEMs, NIST is introducing SP 800-227, Recommendations for Key-Encapsulation Mechanisms. This draft document describes the basic definitions, properties, and applications of KEMs. It also provides recommendations for implementing and using KEMs in a secure manner.

See the publication details for a copy of the draft and instructions for submitting comments.

Dustin


John Mattsson

Mar 7, 2025, 1:12:32 PM
to pqc-forum

Hi,

Here are the comments Ericsson submitted on SP 800-227 ipd
https://emanjon.github.io/NIST-comments/2025%20-%20SP%20800-227.pdf

 

“We think SP 800-227 should discuss binding properties [1–2], private key storage formats [3], implicit and explicit rejection [4], and anonymity and robustness of KEM-DEM constructions [5]. We think future KEMs should exhibit strong binding properties (e.g., MAL-BIND-K-CT, MAL-BIND-K-PK, etc.), ensure anonymity and robustness in KEM-DEM constructions, use a 32-byte seed as the private key, and be explicitly rejecting. While committing AEADs imposes significant performance costs, KEM binding properties can be achieved with a small performance impact. We do not believe implicit rejection offers practical security benefits in systems—on the contrary, it merely defers the problem, assuming that later steps will properly handle the rejection. This places an additional burden on application correctness, which is disadvantageous. As shown in [4], explicit rejection is as secure as implicit rejection. Moreover, implicit rejection excludes certain binding properties and introduces additional performance overhead.”

 

We cannot see any theoretical or practical benefits to implicit rejection; are there any?

 

Cheers,

John

D. J. Bernstein

Mar 9, 2025, 6:46:29 PM
to pqc-...@list.nist.gov
'John Mattsson' via pqc-forum writes:
> We cannot see any theoretical or practical benefits with implicit
> rejection, are there any?

See attached.

---D. J. Bernstein
kem.pdf
signature.asc

Daniel Apon

Mar 9, 2025, 11:50:45 PM
to pqc-...@list.nist.gov
*sigh* In response to the Ericsson Telecommunications service provider & Corporation:

Dan Bernstein is correct. Implicit rejection offers many, strong, practical, real-world security guarantees that cannot be achieved by an explicit-rejection protocol.

For example, https://eprint.iacr.org/2021/708.pdf


Bobby McGee

Mar 10, 2025, 10:37:47 AM
to pqc-forum, Daniel Apon
Instead of *sighs* and litanies of references (which I will look at, because I still don't understand), how about an ELI5:  Why is implicit rejection important in a practical sense?

Here is the naive view.  Suppose we have a failed key exchange with implicit rejection.  As soon as two different derived keys are used for any purpose, both parties will know that the exchange failed.  Hence implicit rejection seems to do nothing.

If there is something fundamentally wrong with this view, surely there is a short, transparent, and convincing argument making that obvious?

[A second slightly-off-topic question I have is:  Does the implicit rejection key itself become a target for attackers?]

D. J. Bernstein

Mar 10, 2025, 3:47:07 PM
to pqc-...@list.nist.gov
Bobby McGee writes:
> how about an ELI5

Your trains are more fun to play with when they aren't broken, right?

A long, long time ago, in the late 20th century, people built a new
choo-choo train called the N-CHOO train. At first they were trying
N-CHOO on straight tracks. But then they tried it on curves and it fell
apart. Completely broken! People were very unhappy with N-CHOO. They
said N-CHOO was unusable and no fun.

But then, years later, people tried replacing the glue in N-CHOO with
stronger glue. Suddenly the train could handle the curves!

The old broken glue is called Explicit Rejection Glue. The new stronger
glue is called Implicit Rejection Glue. Strong glue is important!

Implicit Rejection Glue is also smoother than Explicit Rejection Glue.
Explicit Rejection Glue has sharp edges and people can accidentally poke
themselves. Everybody is using Implicit Rejection Glue now.

There's a very young train built about the same time you were born. That
train is called BRR-BRR. The train is really cold. Maybe the cold would
make Explicit Rejection Glue strong enough. Or maybe it would shatter.
Implicit Rejection Glue is definitely strong and doesn't have the sharp
edges. BRR-BRR uses Implicit Rejection Glue. Better safe than sorry!

> Here is the naive view. Suppose we have a failed key exchange with
> implicit rejection. As soon as two different derived keys are used
> for any purpose, both parties will know that the exchange failed.
> Hence implicit rejection seems to do nothing.

Hmmm, I guess we're not talking about 5-year-olds now. :-)

Many successful attacks against public-key cryptosystems involve
sending modified ciphertexts and studying the following pattern:

* Some ciphertexts produce dec failures (i.e., dec says "reject").
* Some ciphertexts are accepted by dec (producing random-looking
session keys).

The attack details (depending on the cryptosystem details) then work
backwards from this fail-or-not pattern to recover internal secrets:
sometimes recovering plaintexts, sometimes recovering secret keys.

https://cr.yp.to/papers.html#ntrw includes an attack demo of this type
for NTRU-HRSS, and cites many earlier attacks of this type.

These attacks break typical applications of the cryptosystems. Typically
a dec rejection produces a fast application rejection, while a random
dec result produces a slower application rejection (delayed until, e.g.,
the moment that a MAC is checked). Typically the attacker can measure
this difference in timing, even when the application isn't sending
explicit "error 573: PKE rejected this ciphertext" messages.

Implicit rejection stops these attacks. The way implicit rejection does
this is by converting PKE dec failures into pseudorandom KEM session
keys. Those session keys look just like the pseudorandom session keys
from successful PKE dec. The pattern of internal PKE dec failures is
hidden. There are no KEM failures.
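A toy sketch of this mechanism (the stand-in PKE, the HMAC-SHA256 derivation, and the secret rejection value z are all illustrative assumptions; ML-KEM's actual construction differs in detail):

```python
import hashlib
import hmac

def toy_pke_decrypt(sk, c):
    """Toy stand-in PKE: ciphertexts starting with 0x00 'decrypt', all
    others fail. Real KEMs fail on a re-encryption mismatch instead."""
    return c[1:] if c[:1] == b"\x00" else None

def decaps_implicit(sk, z, c):
    """Implicitly rejecting decapsulation: never signal failure.

    On PKE decryption failure, derive a pseudorandom key from the secret
    value z and the ciphertext; it looks just like a session key from a
    successful decryption. (A real implementation must also make the
    branch below constant-time.)"""
    m = toy_pke_decrypt(sk, c)
    if m is None:
        return hmac.new(z, c, hashlib.sha256).digest()  # implicit rejection
    return hashlib.sha256(m + c).digest()               # normal derivation

k_good = decaps_implicit(b"sk", b"z-secret", b"\x00hello")
k_bad = decaps_implicit(b"sk", b"z-secret", b"\x01hello")
```

Both calls return a 32-byte key and neither raises an error, so the caller (and hence the attacker) sees no fail-or-not pattern at the KEM interface.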

Implicit rejection doesn't change the fact you mentioned: when PKE dec
fails, a typical application generates failures. The point of implicit
rejection is that the application ends up generating the same type of
failures for the attacker's other modified ciphertexts too, so the
attacker can't see which ciphertexts are triggering PKE failures.

With explicit rejection, this PKE fail-or-not distinction is instead
released to the application, which typically releases it to the
attacker. Again, this is exactly what various known attacks exploit.

There are other virtues of implicit rejection (see the PDF I posted),
but the fact that it blocks various known attacks---rescuing NTRU, for
example---is the most obvious reason for using it.

---D. J. Bernstein
signature.asc

Blumenthal, Uri - 0553 - MITLL

Mar 10, 2025, 4:05:22 PM
to pqc-...@list.nist.gov
Re. KEM returning “reject” vs. a random key in case of failure: aren’t the two essentially the same from the adversary’s point of view, at least in practical use? Except that one informs the attacker more quickly, and the other forces it to wait until the Key Confirmation completes?


Regards,
Uri

Secure Resilient Systems and Technologies
MIT Lincoln Laboratory


Simo Sorce

Mar 10, 2025, 4:46:08 PM
to Blumenthal, Uri - 0553 - MITLL, pqc-...@list.nist.gov
When implemented correctly (i.e., in a side-channel-silent way), the two
are quite different.

First of all, the attacker does not get timing information, which is
important because with explicit rejection, timing generally reveals a
lot about where the fault is occurring, opening avenues to reveal secret
state.

Secondly, the attacker can't tell what kind of failure occurred.
Did the cryptosystem fail completely and return a pseudo-randomly
generated key? Or did the attacker succeed in generating a ciphertext
that actually results in a valid key exchange?

Finally it may (depending on the operation/protocol/application) force
the attacker to spend the same amount of resources the target spends in
order to verify whether the attack caused a failure or not, because it
forces the attacker to complete the whole handshake/exchange in order
to get the reply.

HTH,
Simo.

--
Simo Sorce
Distinguished Engineer
RHEL Crypto Team
Red Hat, Inc

Blumenthal, Uri - 0553 - MITLL

Mar 10, 2025, 6:30:14 PM
to Simo Sorce, pqc-...@list.nist.gov

> When implemented correctly (ie in a side-channel silent way) the two
> are quite different.
>
> First of all, the attacker does not get timing information which is
> important, because with explicit rejection generally timing reveals a
> lot of where the fault is occurring opening avenues to reveal secret
> state.

True, though it seems that in the above context, timing would matter only for a “local” attacker that can “physically” affect the processing device.

> Secondly the attacker can't tell what kind of failure occurred.
> Did the cryptosystem fail completely and returned a pseudo-randomly
> generated key? Or did the attacker succeed in generating ciphertext
> that actually results in a valid key exchange?

I’m afraid I don’t understand the above. What role does this example assign to the attacker? If the attacker can inject traffic, then sooner or later she’ll learn whether the given attempt was successful or not, right?

> Finally, it may (depending on the operation/protocol/application) force
> the attacker to spend the same amount of resources the target spends in
> order to verify whether the attack caused a failure or not, because it
> forces the attacker to complete the whole handshake/exchange in order
> to get the reply.

Oh yes, in “my” scenario the attacker has to complete the entire handshake. But that’s what I meant when I said “one informs the attacker quicker, and the other one forces to wait till KC completes”.

Simo Sorce

Mar 10, 2025, 7:38:15 PM
to Blumenthal, Uri - 0553 - MITLL, pqc-...@list.nist.gov
On Mon, 2025-03-10 at 22:30 +0000, Blumenthal, Uri - 0553 - MITLL
wrote:
> When implemented correctly (ie in a side-channel silent way) the two
> are quite different.
>
> First of all, the attacker does not get timing information which is
> important, because with explicit rejection generally timing reveals a
> lot of where the fault is occurring opening avenues to reveal secret
> state.
>

>
> True, though it seems that in the above context, timing would matter only for “local” attacker that can “physically” affect the processing device.

Definitely not a matter only for a local attacker, my coworker
published this paper less than 2 years ago:
https://people.redhat.com/~hkario/marvin/marvin-attack-paper.pdf

We got better since then at measuring the side-channels remotely, and
the only practical cure, for those that can't avoid using PKCS1.5
padding, is to implement implicit rejection.
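A toy sketch of that countermeasure (the padding check and the HMAC-SHA256 derivation are illustrative stand-ins, not the actual TLS or PKCS#1 procedure):

```python
import hashlib
import hmac

def toy_unpad(padded):
    """Toy PKCS#1 v1.5-style check: accept only a fixed prefix. A real
    implementation parses 0x00 0x02 <nonzero padding> 0x00 <message>."""
    return padded[2:] if padded[:2] == b"\x00\x02" else None

def rsa_decrypt_implicit(key_secret, ciphertext):
    """Implicit rejection for RSA PKCS#1 v1.5 decryption (sketch of the
    Bleichenbacher/Marvin countermeasure).

    On a padding failure, return a deterministic pseudorandom message
    derived from a per-key secret and the ciphertext, instead of an
    error; the caller proceeds identically either way. The selection
    below must be constant-time in real code."""
    padded = ciphertext  # stand-in for the raw RSA private-key operation
    m = toy_unpad(padded)
    synthetic = hmac.new(key_secret, ciphertext, hashlib.sha256).digest()
    return m if m is not None else synthetic

ok = rsa_decrypt_implicit(b"per-key-secret", b"\x00\x02payload")
bad = rsa_decrypt_implicit(b"per-key-secret", b"\xff\x02payload")
```

Deriving the substitute deterministically (rather than freshly at random per call) matters: it keeps repeated queries on the same ciphertext from becoming yet another oracle.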

We routinely perform regression testing for this and other side-channel
issues over the network, and we can measure extremely small side
channels with the correct statistical tools even with seemingly
relatively large noise.

Fundamentally there rarely is a timing side-channel that can't be
measured over a network if you have an appropriate oracle that allows
you to perform large numbers of attempts (for example a TLS server).
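As a toy illustration of the statistical point (the numbers below are made up, not taken from the Marvin paper): a 1-microsecond secret-dependent gap buried in 100 microseconds of network jitter becomes visible once enough samples are averaged.

```python
import random
import statistics

random.seed(1)  # fixed seed so the toy experiment is reproducible
NOISE_US, GAP_US, SAMPLES = 100.0, 1.0, 100_000

def response_time(secret_dependent_slow_path):
    # A server whose processing time differs by GAP_US depending on a
    # secret condition, observed through heavy Gaussian network jitter.
    base = 1000.0 + (GAP_US if secret_dependent_slow_path else 0.0)
    return base + random.gauss(0.0, NOISE_US)

fast = [response_time(False) for _ in range(SAMPLES)]
slow = [response_time(True) for _ in range(SAMPLES)]
gap = statistics.mean(slow) - statistics.mean(fast)
# Standard error of the difference is NOISE_US * sqrt(2/SAMPLES) ~ 0.45 us,
# so the 1 us gap emerges from the noise; a single sample pair cannot show it.
```

This is the oracle effect in the message above: each individual measurement is useless, but an attacker who can make a large number of attempts averages the noise away.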

> Secondly the attacker can't tell what kind of failure occurred.
> Did the cryptosystem fail completely and return a pseudo-randomly
> generated key? Or did the attacker succeed in generating ciphertext
> that actually results in a valid key exchange ?

> I’m afraid I don’t understand the above. What role for the attacker does the above example assign? If the attacker can inject traffic – then sooner or later she’ll learn whether the given attempt was successful or not, right?

Each cryptosystem is different and most are not as broken as RSA with
certain padding, but see the paper above and its references for the
worst case of how bad explicit rejection can go.

HTH.
Simo.

D. J. Bernstein

unread,
Mar 11, 2025, 9:39:49 AMMar 11
to pqc-...@list.nist.gov
Blumenthal, Uri - 0553 - MITLL writes:
> Re. KEM returning “reject” vs. random key in case of failure: aren’t
> the two essentially the same from the adversary point of view, at
> least in practical use?

No. An attacker paying attention to the cryptosystem details will send
modified ciphertexts, will look at which ciphertexts produce "reject"
and which ciphertexts instead give keys, and will use this information
(exactly the information that implicit rejection would have kept secret)
to try to compute plaintexts, private keys, etc. See

https://cypherpunks.ca/~iang/pubs/paper-reaction-attacks.pdf

for some examples of cryptosystems where this works, quickly breaking
IND-CCA2 and breaking typical applications. These attacks are examples
of what cryptographers call "chosen-ciphertext attacks".

---D. J. Bernstein

Blumenthal, Uri - 0553 - MITLL

unread,
Mar 11, 2025, 9:56:57 AMMar 11
to D. J. Bernstein, pqc-...@list.nist.gov

> Re. KEM returning “reject” vs. random key in case of failure: aren’t
> the two essentially the same from the adversary point of view, at
> least in practical use?

> No. An attacker paying attention to the cryptosystem details will send
> modified ciphertexts, will look at which ciphertexts produce "reject"
> and which ciphertexts instead give keys, and will use this information

What I’m failing to understand is how this is different from “…will look at the result of the handshake to see which ciphertexts resulted in valid (confirmed) keys, and which ones produced garbage keys”? Except that a complete handshake will likely take more time than “reject” in the middle of the process?

> See https://cypherpunks.ca/~iang/pubs/paper-reaction-attacks.pdf

Thanks – will do (actually, pulled it, and am reading).

TNX

 

Joost Renes

unread,
Mar 11, 2025, 10:27:38 AMMar 11
to Blumenthal, Uri - 0553 - MITLL, d...@cr.yp.to, pqc-...@list.nist.gov

Hi,

 

We cannot point to insecure instantiations of explicit rejection and conclude that explicit rejection is insecure in general.

For each implementation of implicit rejection there is an explicit counterpart: compute the binary value that determines whether to implicitly reject or not, and make it public. For ML-KEM, this would be Line 9 of Algorithm 18, revealing the outcome of the comparison c = c’. Conversely, each implementation of explicit rejection has an implicit counterpart (namely, compute the binary value and, instead of releasing it, return a random key).
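The two counterparts can be sketched concretely. This is a toy illustration only (a stand-in hash-based "PKE", not ML-KEM; all names are made up, and the caller is assumed to have already run decryption): the shared core computes the rejection bit via a re-encryption check, and the two wrappers differ only in whether that bit is made public.

```python
import hashlib

def h(*parts):
    m = hashlib.sha3_256()
    for p in parts:
        m.update(p)
    return m.digest()

def toy_encrypt(pk, m):
    # Toy stand-in for a deterministic PKE; NOT a secure scheme, used
    # only so the re-encryption check below has something to check.
    return h(b"ct", pk, m)

def decaps_core(pk, z, m_recovered, c):
    """Compute the 1-bit rejection value plus both candidate keys.
    z is a per-key secret used only for the implicit-rejection key."""
    reject = m_recovered is None or toy_encrypt(pk, m_recovered) != c
    good = h(b"key", m_recovered or b"", c)
    bad = h(b"rej", z, c)  # random-looking but deterministic in c
    return reject, good, bad

def decaps_implicit(pk, z, m_recovered, c):
    reject, good, bad = decaps_core(pk, z, m_recovered, c)
    return bad if reject else good   # rejection bit stays internal

def decaps_explicit(pk, z, m_recovered, c):
    reject, good, bad = decaps_core(pk, z, m_recovered, c)
    return None if reject else good  # rejection bit made public
```

On valid ciphertexts the two wrappers agree; on invalid ones, the explicit variant hands the attacker the rejection bit directly, while the implicit variant returns a key that only the legitimate parties can tell apart from an accept.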

 

The difference in information between the implicit and explicit counterparts to the attacker is 1 bit – whether or not rejection has taken place. This appears to be the same bit of information that an attacker would obtain through plaintext confirmation. So the question is whether revealing this bit of information as part of the KEM is really an issue, in particular when plaintext confirmation is used.

I do not have a strong opinion one way or the other at this point, as implicit rejection may indeed be harder to get wrong, but I would not go as far as to say that explicit rejection (as implemented above) is less secure in all scenarios.

 

KR,

Joost

--

You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.

Bobby McGee

unread,
Mar 11, 2025, 10:58:43 AMMar 11
to pqc-forum, Joost Renes, Blumenthal, Uri - 0553 - MITLL, d...@cr.yp.to
> The point of implicit rejection is that the application ends up generating the same type of failures for the attacker's other modified ciphertexts too, so the attacker can't see which ciphertexts are triggering PKE failures.

> Secondly the attacker can't tell what kind of failure occurred. Did the cryptosystem fail completely and return a pseudo-randomly generated key? Or did the attacker succeed in generating ciphertext that actually results in a valid key exchange?


> No. An attacker paying attention to the cryptosystem details will send modified ciphertexts, will look at which ciphertexts produce "reject" and which ciphertexts instead give keys, and will use this information

So, if I'm following, there exist attacks using modified ciphertext whose validity is unknown, and implicit rejection prevents deciding between the cases

- modified ciphertext invalid and
- modified ciphertext valid (but possibly associated with different plaintext if the original plaintext was unknown)

since the eventual "failure" resulting from different derived keys could be attributed either to implicit rejection or to acceptance (of a different key or the key associated with the original plaintext if the plaintext wasn't known in the first place)?

I guess I've always thought that the re-encryption part of the FO transform was doing the heavy lifting against chosen-ciphertext attacks, e.g. protecting against simple algebraic tricks picking off the secret key, and that implicit rejection was some kind of theoretical formality.  [And it seems I'm not alone in my misapprehension?]

Simo Sorce

unread,
Mar 11, 2025, 11:24:16 AMMar 11
to Joost Renes, Blumenthal, Uri - 0553 - MITLL, d...@cr.yp.to, pqc-...@list.nist.gov
On Tue, 2025-03-11 at 14:27 +0000, Joost Renes wrote:
> For ML-KEM, this would be Line 9 in Algorithm 18 and revealing the outcome of the comparison c = c’.
> Conversely, each implementation of explicit rejection has an implicit counterpart (namely, compute the binary value and instead of releasing it, return a random key).

I would like to point out that returning a freshly random value is OK only for non-deterministic algorithms.

For deterministic algorithms, returning a fresh random value may give away the protection of implicit rejection; instead, a pseudo-random value should be generated, such that two attempts with the same bad ciphertext return the same pseudo-random value.
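A minimal sketch of this point (names and parameters are illustrative): with fresh randomness, decapsulating the same bad ciphertext twice exposes the rejection; deriving the key as a PRF of a fixed secret and the ciphertext does not.

```python
import hashlib
import os

Z = os.urandom(32)  # fixed per-key secret, chosen once at key generation

def reject_key_fresh(c):
    # WRONG for a deterministic KEM: two decapsulations of the same bad
    # ciphertext return different keys, revealing that rejection occurred.
    return os.urandom(32)

def reject_key_prf(c):
    # Deterministic: the same bad ciphertext always yields the same
    # random-looking key, matching the behavior of an accepted ciphertext.
    return hashlib.sha3_256(Z + c).digest()

c = b"some malformed ciphertext"
assert reject_key_fresh(c) != reject_key_fresh(c)  # detectable mismatch
assert reject_key_prf(c) == reject_key_prf(c)      # consistent, opaque
```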

Christopher J Peikert

unread,
Mar 11, 2025, 11:54:20 AMMar 11
to pqc-...@list.nist.gov
Security under chosen-ciphertext attack (IND-CCA2) has been a primary requirement for NIST PQC KEMs from the start. The cryptosystems considered in the above link would also fail to have IND-CCA2 if they used implicit rejection, so they do not shed any light on the question of rejection type for NIST PQC.

One can get IND-CCA2 using either implicit or explicit rejection. Indeed, Fujisaki-Okamoto as used in NIST PQC KEMs can use either method. So the question is just what other properties might be gained or lost from the choice of rejection method.

Regarding the earlier claim "Implicit rejection offers many, strong, practical, real-world security guarantees that cannot be achieved by an explicit-rejection protocol. For example, https://eprint.iacr.org/2021/708.pdf" -- that's not what this work shows. To the contrary, it leaves explicit rejection on stronger ground for the properties the work considers. It was able to transfer some -- but not all -- prior results for explicit-rejection schemes to their implicit-rejection analogs.

(Representative quotes: "KEMs that perform implicit rejection do not in general transfer anonymity and robustness to PKEs obtained via the KEM-DEM paradigm from the KEM component, whilst KEMs that offer explicit rejection, and that also satisfy a mild robustness property, do" and "an implicit rejection KEM cannot be robust.")

Sincerely yours in cryptography,
Chris

Daniel Apon

unread,
Mar 11, 2025, 1:48:25 PMMar 11
to pqc-forum, Christopher J Peikert
Oh, you're right.

In the greatest tradition of this forum:

I hereby formally retract my claims, assertions, opinions, and emotive expressions in my email sent Mar 9, 2025, 11:50:45 PM, Washington, D.C. time.

Thanks for catching my errors, Chris!
--Daniel

D. J. Bernstein

unread,
Mar 12, 2025, 4:05:57 PMMar 12
to pqc-...@list.nist.gov
Bobby McGee writes:
> So, if I'm following, there exist attacks using modified ciphertext whose
> validity is unknown, and implicit rejection prevents deciding between the
> cases
> - modified ciphertext invalid and
> - modified ciphertext valid (but possibly associated with different
> plaintext if the original plaintext was unknown)
> since the eventual "failure" resulting from different derived keys could be
> attributed either to implicit rejection or to acceptance (of a different
> key or the key associated with the original plaintext if the plaintext
> wasn't known in the first place)?

Right. As an example, let's look at a chosen-ciphertext attack against
Niederreiter's compressed version of the original McEliece system.

This attack isn't difficult. The basic point is simply that if you have
a bit vector of weight B (i.e., the vector is 1 at B positions, and 0 at
all other positions), and you flip a bit (changing 0 to 1 or 1 to 0),
then the result has weight B+1 if the bit you flipped was originally 0,
and B-1 otherwise.

(There are also many papers on lattice attacks that exploit explicit
rejection; those have a similar flavor and can sometimes even recover
Alice's private key. But the following attack is easier.)

System description first:

* Alice's public key K is a bit matrix. For concreteness, let's
assume this is a 1024 x 500 matrix. (I'm taking McEliece's 1978
example of sizes estimated as 2^64 security, and then applying
Niederreiter compression, which doesn't affect security.)

* To encrypt, Bob starts with a secret length-1024 bit vector e of
weight 50 (so e is 1 at 50 positions and 0 at 974 positions).
The ciphertext is simply the product eK, a bit vector of length
500. Matrix arithmetic is carried out modulo 2.

* Alice has a decryption algorithm that, using the private key, can
work backwards from eK to e. The details don't matter for the
attack below.

Assumptions about Alice and Bob trying hard to stop attacks:

* Let's assume that Bob chooses e completely at random among all
vectors of weight 50, and uses e only once.

* Let's assume that Alice rejects the ciphertext if the decryption
algorithm fails to come up with e, or if the resulting e doesn't
have weight exactly 50, or if multiplying by K doesn't give the
ciphertext back. (This is reencryption plus explicit rejection.)

* Let's assume that Alice and Bob hash e with a standard hash
function to obtain a session key used for authenticated encryption
of whichever message is being sent. (Public-key encryption plus
the hashing is a KEM; the authenticated cipher is a DEM.)

* Let's assume that Alice and Bob are including the ciphertext eK in
the hash input, so that different ciphertexts will obviously
produce different session keys.

Aside from 2^64 being too small, this sounds like best practices in
stopping chosen-ciphertext attacks, right? Well, no, let's break it.

The attacker sees Bob's ciphertext eK. Notice that the following are all
equivalent since we're doing arithmetic modulo 2:

* flipping a bit at the first position in e;

* replacing e with e+f, where f is the vector (1,0,0,...,0);

* replacing eK with eK+fK.

As a warmup for the attack, imagine what will happen if the attacker
computes eK+fK and sends this to Alice. There are now two possibilities:

* Maybe the first position in e was 1. Then flipping that bit
produces 0, so the modified vector has weight only 49.

* Maybe the first position in e was 0. Then flipping that bit
produces 1, so the modified vector has weight 51.

Either way, Alice rejects the ciphertext, because decryption is looking
for weight exactly 50. So far the attack isn't accomplishing anything
useful for the attacker.

Actually, it's conceivable that the 51 case ends up being decrypted as a
different 50 vector rather than being rejected, but there's theory and
experiment saying that the attacker can't reasonably hope for that to
happen. So let's move on to a more powerful attack.

What the attacker actually does is flip the bits at the first _two_
positions in e. Now there are three possibilities:

* Maybe those first two positions in e were (1,1), so flipping them
gives (0,0), and overall weight 48. Alice rejects this.

* Maybe those first two positions in e were (0,0), so flipping them
gives (1,1), and overall weight 52. Alice rejects this.

* Maybe those first two positions in e were (0,1) or (1,0), so
flipping them gives overall weight 50. Alice _accepts_ this!

In the acceptance case, Alice ends up with a different session key, and
the attacker has no idea what that session key is going to be, so the
attacker's forged MAC is going to be rejected. But, in the IND-CCA2
attack model, the attacker can see the difference between

* decryption returning "reject" and

* decryption returning some session key.

There are also endless examples of applications that give away the
same distinction through error messages or timing. For example, people
often hear that good software is supposed to check for bad inputs and
immediately reject those inputs with an informative error message, such
as "public-key decryption failed" or "AES-GCM decryption failed".

Anyway, in IND-CCA2 or any other scenario where the attacker can see how
decryption reacts, the attacker now

* sees that decryption returns "reject", and concludes that the
first two positions in e are the same, or

* sees that decryption returns some session key, and concludes that
the first two positions in e are different.

The attacker then repeats this process with other choices of positions,
and it's an easy exercise to quickly extract the whole secret vector
from this information. Done with the attack.
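The attack can be simulated directly. The sketch below abstracts Alice's decryption into an accept/reject oracle (accept iff the modified error vector still has weight exactly 50, as described above); the matrix K and the actual decoding are not needed to see the leak, since flipping bits of eK via fK corresponds to flipping bits of e.

```python
import random

N, W = 1024, 50  # McEliece's 1978 sizes after Niederreiter compression

# Bob's secret error vector e: weight exactly W.
e = [0] * N
for i in random.sample(range(N), W):
    e[i] = 1

queries = 0

def alice_accepts(a, b):
    """Explicit-rejection oracle: the attacker adds rows a and b of K to
    the ciphertext, which flips bits a and b of e; Alice accepts iff the
    decoded vector still has weight exactly W."""
    global queries
    queries += 1
    mod = e[:]
    mod[a] ^= 1
    mod[b] ^= 1
    return sum(mod) == W

# Attacker: flipping positions (0, i) is accepted iff e[0] != e[i].
diff = [alice_accepts(0, i) for i in range(1, N)]
guess = [0] + [int(d) for d in diff]
if sum(guess) != W:            # initial guess e[0] = 0 was wrong
    guess = [g ^ 1 for g in guess]
assert guess == e              # full secret recovered in N - 1 queries
```

Implicit rejection removes exactly this oracle: every modified ciphertext comes back as a random-looking session key, so the accept/reject pattern the attacker reads off above is no longer visible.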

Now let's see how this attack is blocked if we add implicit rejection,
item 4 in the list of best practices for chosen-ciphertext security in
https://cr.yp.to/talks/2018.02.01/slides-djb-20180201-mceliece-a4.pdf.

Implicit rejection means that, when the KEM sees decryption failing (not
successfully finding e, or not passing the checks on e), it doesn't
return an explicit "reject": instead it returns a random-looking session
key, specifically a hash of a secret and the ciphertext.

This directly addresses the information leak exploited in the attack.
This eliminates part of the IND-CCA2 attack surface: it makes IND-CCA2
more likely to be achieved (in particular, rescuing some systems that we
know are broken with explicit rejection) and simplifies security review.

Of course, the software also has to be carefully written to avoid
leaking the same information via timing, but now this job is handled
_inside the KEM software_, which gives us a chance of getting it right,
whereas it's hopeless to ask an endless list of applications to hide
the "reject" information. See the kem.pdf that I posted for further
advantages of implicit rejection.

> I guess I've always thought that the re-encryption part of the FO
> transform was doing the heavy lifting against chosen-ciphertext
> attacks, e.g. protecting against simple algebraic tricks picking off
> the secret key,

The chosen-ciphertext attacks in the literature sometimes exploit
failures to reencrypt, yes, but that's far from the only known way for
chosen-ciphertext security to fail. The list of best practices mentioned
above has 6 different items!

In the attack example above, Alice isn't saved by reencryption; the
attack instead exploits explicit rejection. The same comment applies to
many lattice attacks in the literature. For example, ntruhrss701 would
be broken if it changed from implicit rejection to explicit rejection;
it isn't saved by reencryption.

> and that implicit rejection was some kind of
> theoretical formality. [And it seems I'm not alone in my
> misapprehension?]

Section 3 of https://cr.yp.to/papers.html#ntrw surveys these attacks and
defenses (not just implicit rejection), but there are many papers that
focus purely on proofs and that miss the extra information available
from attacks.

Of course, describing "proofs of security" as analyzing all attacks at
once makes them sound strictly more powerful than analyses of single
attacks. But the reality is that what people actually prove is much
weaker than that, so looking at attacks is essential.

Having said that: It's actually possible to see some of the virtues of
implicit rejection through a proof lens, very carefully comparing what
has been proven for implicit rejection to what has been proven for
explicit rejection. The normal proof conclusions in this area are either
ROM IND-CCA2 or QROM IND-CCA2; for either of those conclusions, the
proofs for explicit rejection make more restrictive assumptions than the
proofs for implicit rejection. There's also an easy proof saying that
the other way around cannot happen.

Unfortunately, this sort of comparison is labor-intensive and can easily
go wrong if hypotheses are understated, if conclusions are overstated,
or if there are proof errors. See https://eprint.iacr.org/2019/1336 for
many examples of these phenomena.

That's also what happened here. https://eprint.iacr.org/2025/062 gave an
explicit-rejection theorem with

(1) various restrictions that are missing from the paper's summary
of what was proven, and

(2) further restrictions that appeared in the paper's summary but
disappeared in the summary that started this thread.

See the kem.pdf that I posted for quotes. The errors could have been
stopped by double-checks against attacks, or by the rule from
https://www.di.ens.fr/~stern/data/St101.pdf that proofs "need time to be
validated through public discussion".

---D. J. Bernstein

D. J. Bernstein

unread,
Mar 12, 2025, 4:42:03 PMMar 12
to pqc-...@list.nist.gov
Joost Renes writes:
> We cannot point to insecure instantiations of explicit rejection and
> conclude that explicit rejection is insecure in general.

Of course. The big picture of known IND-CCA2 attacks is that there are

* some systems where IND-CCA2 is unbroken whether the rejection is
implicit or explicit,

* some systems where IND-CCA2 is unbroken when the rejection is
implicit and broken if the rejection is explicit, and

* some systems where IND-CCA2 is broken whether the rejection is
implicit or explicit.

Systems can move down this list as new attacks are discovered. As an
extreme example, the claimed IND-CCA2 for sikep751 is now known to be
breakable in seconds, whether the rejection is implicit or explicit.

> The difference in information between the implicit & explicit
> counterparts to the attacker is 1 bit – whether or not rejection has
> taken place.

Rejection from the underlying PKE, yes. The explicit-rejection attacks
that recover many secret bits (from a PKE plaintext and thus the KEM
session key, or from a PKE/KEM private key) typically use thousands of
modified ciphertexts.

(IND-CCA2 for PKEs considers the scenario of a plaintext having just one
secret bit; this is sometimes breakable with just one ciphertext, but is
a corner case. Historically, what woke people up to the real-world
applicability of chosen-ciphertext attacks was an attack by
Bleichenbacher known as the million-message attack.)

With explicit rejection, the modified ciphertexts that trigger PKE
rejection turn into rejection from the KEM and then fast rejection from
a typical application, whereas the other chosen ciphertexts from the
attacker turn into session keys from the KEM and then slow rejection
from the application. These attacks work backwards from the fast/slow
pattern to secrets, breaking IND-CCA2 and breaking application security.

With implicit rejection, all of the modified ciphertexts turn into
session keys from the KEM and then slower rejection from the
application, so this leak disappears.

> This appears to be the same bit of information that an attacker would
> obtain through plaintext confirmation.

Assuming you mean plaintext confirmation in the surrounding protocol,
such as checking a MAC: No, an implicit-rejection KEM hides the
rejection that it sees from its internal PKE, so the surrounding
protocol also no longer gives away the information.

---D. J. Bernstein

Joost Renes

unread,
Mar 12, 2025, 4:59:09 PMMar 12
to d...@cr.yp.to, pqc-...@list.nist.gov
Hi Dan,

> Rejection from the underlying PKE, yes.
> With explicit rejection, the modified ciphertexts that trigger PKE rejection turn into rejection from the KEM
> No, an implicit-rejection KEM hides the rejection that it sees from its internal PKE

We are not using the same definition of "explicit rejection" in that case.
I understand explicit rejection to mean an explicit binary return from the KEM that says whether the ciphertext has been rejected or not.
This does not necessarily equal the PKE rejection, and in fact in many cases should not equal it, as there are indeed many examples of active attacks if it does.
For example, for ML-KEM the binary return value would inform whether re-encryption has succeeded, not whether Decrypt has succeeded.

For any attacker Bob that performs the encryption/encapsulation step (like in your example in the other email) the correct session key is known to Bob.
This means Bob can distinguish between a random-looking session key (KEM "reject") and the correct session key (KEM "accept").
Hence Bob learns exactly the same information, no matter if Alice explicitly or implicitly rejects.

KR,
Joost

-----Original Message-----
From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> On Behalf Of D. J. Bernstein
Sent: Wednesday, March 12, 2025 3:42 PM
To: pqc-...@list.nist.gov
Subject: Re: [EXT] Re: [pqc-forum] Re: Recommendations for Key-Encapsulation Mechanisms | Draft SP 800-227 is Available for Comment


Christopher J Peikert

unread,
Mar 12, 2025, 6:30:39 PMMar 12
to pqc-...@list.nist.gov
On Wed, Mar 12, 2025 at 4:05 PM D. J. Bernstein <d...@cr.yp.to> wrote:
Bobby McGee writes:
> So, if I'm following, there exist attacks using modified ciphertext whose
> validity is unknown, and implicit rejection prevents deciding between the
> cases
> - modified ciphertext invalid and
> - modified ciphertext valid (but possibly associated with different
> plaintext if the original plaintext was unknown)
> since the eventual "failure" resulting from different derived keys could be
> attributed either to implicit rejection or to acceptance (of a different
> key or the key associated with the original plaintext if the plaintext
> wasn't known in the first place)?

> Right. As an example, let's look at a chosen-ciphertext attack against
> Niederreiter's compressed version of the original McEliece system.

The attack you've described isn't against N's version of M's original system. It's against a version with an additional, very particular, collection of not-good-enough countermeasures against chosen-ciphertext attacks.

N's version of M's original system and the cryptosystems in the paper you previously linked are easily seen not to be IND-CCA2 secure, whether they use implicit or explicit rejection.
 
> Assumptions about Alice and Bob trying hard to stop attacks:
>
> ... this sounds like best practices in stopping chosen-ciphertext attacks, right?

No, these aren't best practices in stopping CCA attacks -- they don't match the Fujisaki-Okamoto transform, nor (more relevant to this setting) Dent's version for transforming deterministic one-way encryption into a CCA-secure KEM. Fully stopping CCA attacks is very subtle, so why should we expect this ad-hoc collection of countermeasures to do so?

The deficiency is that nothing ensures that "e" (the random input to the one-way encryption) is "authentic." Requiring this is a basic goal of the re-encryption paradigm; we want to ensure that whoever produced the ciphertext must have "known" the encryption algorithm's random input.

FO and Dent ensure an "authentic e" by requiring "e" to be a certain hash output. Another way is to include a hash of "e" in the ciphertext. Both are conspicuously missing here.

Implicit rejection comes to the rescue in this specific case (when e isn't guaranteed to be authentic) to give IND-CCA2. But that doesn't make implicit better than explicit in any general sense; it just happens to fix this particular (otherwise bad) passive-to-CCA transform.

Arguably, we might want "authentic e" -- often called "plaintext knowledge" -- as a KEM security property on top of IND-CCA2. As this example shows, implicit rejection can smooth over a lack of plaintext knowledge, whereas explicit rejection can be used to ensure plaintext knowledge.

D. J. Bernstein

unread,
Mar 12, 2025, 6:49:49 PMMar 12
to pqc-...@list.nist.gov
Joost Renes writes:
> We are not using the same definition of "explicit rejection" in that
> case.

Could be; so, yes, let's make the definitions, um, explicit. :-)

What I mean by "implicit rejection" is exactly the transformation
ImplicitRejection defined in https://cr.yp.to/papers.html#tightkem. This
is factored out of the transforms defined in the earlier HHK paper (and
used in many KEM proposals) that are informally labeled as using
"implicit rejection", so the terminology is compatible.

For "explicit rejection", what I mean is the alternative briefly
described in https://cr.yp.to/papers.html#tightkem at the end of Section
15, which again is compatible with the informal labeling of transforms
defined in the HHK paper (and occasional KEM proposals).

> I understand explicit rejection to mean an explicit binary return from
> the KEM that says whether the ciphertext has been rejected or not.

Well, sure, but this begs the question "rejected by what?", and the
normal answer to that is an underlying PKE.

> This does not necessarily equal the PKE rejection, and in fact in many
> case should not equal it, as there are indeed many examples of active
> attacks if it is.
> For example, for ML-KEM the binary return value would inform whether
> re-encryption has succeeded, not whether Decrypt has succeeded.

Reencryption is important, sure, but a separate issue from implicit
rejection. In the notation of https://cr.yp.to/papers.html#tightkem,
there's first a PKE-to-PKE transform ReEnc, and then a PKE-to-KEM
transform ImplicitRejection. The ReEnc transform produces an
intermediate PKE that's "rigid", meaning that dec fails except for
ciphertexts produced by enc; the PKE provided as input to ReEnc doesn't
necessarily have this property. There are other ways to construct rigid
PKEs; ReEnc is a no-op if it's given a rigid PKE as input.

> For any attacker Bob that performs the encryption/encapsulation step
> (like in your example in the other email) the correct session key is
> known to Bob.

My example has a legitimate user Bob sending a ciphertext to Alice's
public key, and it has a separate attacker using a chosen-ciphertext
attack to rapidly decrypt Bob's ciphertext. The attacker isn't Bob. The
attacker is using computations different from the normal enc procedure
(concretely, taking vectors of the wrong weight), and doesn't know any
of the session keys until the attack is complete.

> This means Bob can distinguish between a random-looking session key
> (KEM "reject") and the correct session key (KEM "accept").
> Hence Bob learns exactly the same information, no matter if Alice
> explicitly or implicitly rejects.

In my example, the attacker is rapidly exploiting exactly the
information provided by explicit rejection, whereas implicit rejection
stops the attack.

If you'd rather see code than trace through an attack description, try
the demo from https://cr.yp.to/papers.html#ntrw; this is more complex
since it's for lattices, but the basic data flow is the same.

---D. J. Bernstein

Joost Renes

unread,
Mar 13, 2025, 8:19:16 AMMar 13
to d...@cr.yp.to, pqc-...@list.nist.gov
Hi Dan,

> Well, sure, but this begs the question "rejected by what?", and the normal
> answer to that is an underlying PKE.

Well, not to me at least; with ML-KEM being the KEM standardized by NIST, the obvious answer would rather be "rejected by re-encryption".
This also holds true for HQC, which will soon join the party.

> In my example, the attacker is rapidly exploiting exactly the information
> provided by explicit rejection, whereas implicit rejection stops the attack.

I don't see how this is demonstrated.
Yes, if Bob (the encrypter) is not the attacker then it will be somewhat
harder to distinguish the "accept" and "reject" case.
Yet even then, the application will perform a sequence of actions A in the
"accept" case, and a sequence of actions B in the "reject" case.
If Bob can observe whether A = B or A != B, then he learns the same
information as he would have with implicit rejection.
Applications are not trying to hide this information (let alone with respect
to timing, power, EM, etc. side channels) so we can assume that Bob can in
fact observe this difference.
It may take a little longer, but the attack works nonetheless even if implicit
rejection is used.

KR,
Joost

-----Original Message-----
From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> On Behalf Of D. J.
Bernstein
Sent: Wednesday, March 12, 2025 5:49 PM
To: pqc-...@list.nist.gov
Subject: Re: [EXT] Re: [pqc-forum] Re: Recommendations for Key-Encapsulation
Mechanisms | Draft SP 800-227 is Available for Comment


D. J. Bernstein

unread,
Mar 13, 2025, 8:50:42 AMMar 13
to pqc-...@list.nist.gov
'Christopher J Peikert' via pqc-forum writes:
> The deficiency is that nothing ensures that "e" (the random input to the
> one-way encryption) is "authentic." Requiring this is a basic goal of the
> re-encryption paradigm; we want to ensure that whoever produced the
> ciphertext must have "known" the encryption algorithm's random input.

As one can easily see from Niederreiter's paper or from the description
that I posted, this PKE is deterministic: i.e., the PKE randomness is
the empty string, while e (a longer vector) is the PKE plaintext.

So, even if the caller chooses e randomly (as the KEM that I described
does), it's not correct that e is "the random input to the one-way
encryption". Structurally, this description is mixing up the two PKE
inputs, whereas one has to keep them straight for analyzing reencryption
and other aspects of security. (An empty message would flunk OW-CPA!)

Since the erroneous description of e appears only inside parentheses in
the message that I'm replying to, I think I now have to spell out how
fixing this error makes that whole message collapse.

First of all, empty randomness is automatically authentic, so the claim
of nothing ensuring authenticity of the randomness is incorrect. The
"deficiency" claim is incorrect. The claim that this system doesn't
satisfy this "basic goal of the re-encryption paradigm" is incorrect.

More fundamentally, the attribution of this chosen-ciphertext attack to
a lack of reencryption is incorrect. The protections that I described
_include_ reencryption, and yet the attack works.

The claim that the broken system uses an "ad-hoc collection of
countermeasures" that implicit rejection "just happens to fix" is also
incorrect. Adding implicit rejection

* gives this system exactly the same protections as various deployed
systems such as ntruhrss701 and mceliece6960119,

* makes various HHK theorems (including the corrections from
https://cr.yp.to/papers.html#tightkem) fully applicable, and

* makes the BHHHP theorem fully applicable,

culminating in a tight proof of QROM IND-CCA2 security assuming merely
OW-CPA for the underlying PKE.

For comparison, the best proofs available for Kyber/ML-KEM, HQC, and
FrodoKEM allow QROM IND-CCA2 security to be much worse than OW-CPA for
the underlying PKE---security could drop by more than 100 bits!

To summarize, the message I'm replying to gets the picture of system
constructions, attacks, defenses, and proofs wrong at every level, and
in particular misdiagnoses the reason that this attack works.

> FO and Dent ensure an "authentic e" by requiring "e" to be a certain
> hash output. Another way is to include a hash of "e" in the
> ciphertext. Both are conspicuously missing here.

This description of what FO and Dent require is incorrect, since e is
actually the PKE plaintext. If we charitably modify the first statement
to apply to the actual randomness rather than e, the third statement
becomes incorrect: the randomness here _does_ match hash output. All
length-0 strings are equal.

For people who look more carefully at the system description, the attack
description, and theorem statements, it's not hard to figure out the
actual reasons that the theorems don't cover this system. For example:

* For https://cr.yp.to/papers.html#tightkem or the earlier HHK
implicit-rejection results, the only hypothesis not satisfied by
this system is implicit rejection.

* For the HHK explicit-rejection results, looking at the formulas
says that "spreadness" has to be large enough for the target
security level; "spreadness" is 0 for deterministic systems.

* For Dent's older results on plaintext confirmation, looking at the
formulas says that the plaintext-confirmation hash has to be long
enough for the target security level; here the hash has length 0.

These illustrate how important it is to pay attention to what theorems
actually say. Ignoring a hypothesis or ignoring a term in a formula can
be a disaster, making people believe that something is "provably secure"
when in fact it's breakable.

The survey in https://eprint.iacr.org/2019/1336 of "provable security"
exaggerations found many cases where subsequent attacks exploited the
gaps between the exaggerations and what had in fact been proven. What I
find fascinating about the central exaggeration in the current thread
("explicit rejection is as secure as implicit rejection") is that the
timeline is reversed: there are _previously published_ attacks
exploiting the gaps.

---D. J. Bernstein
signature.asc

D. J. Bernstein

unread,
Mar 13, 2025, 10:16:43 AMMar 13
to pqc-...@list.nist.gov
> > Well, sure, but this begs the question "rejected by what?", and the
> > normal answer to that is an underlying PKE.
> Well not to me at least, ML-KEM being the standardized KEM by NIST the
> obvious answer would rather be "rejected by re-encryption".

This is compatible with what I said. Again, the modular description is
as follows: first ReEnc converts PKE #1 into "rigid" PKE #2, and then
ImplicitRejection converts PKE #2 into a KEM that hides the rejections
from PKE #2.

ImplicitRejection doesn't care that the PKE it sees is internally built
with ReEnc. Sure, ML-KEM uses ReEnc, but there are other ways known to
achieve rigidity; see, e.g., https://eprint.iacr.org/2022/873 or
Section 8 of https://cr.yp.to/papers.html#goppadecoding.

> Yes, if Bob (the encrypter) is not the attacker then it will be somewhat
> harder to distinguish the "accept" and "reject" case.
> Yet even then, the application will perform a sequence of actions A in the
> "accept" case, and a sequence of actions B in the "reject" case.
> If Bob can observe whether A = B or A != B, then he learns the same
> information as he would have with implicit rejection.

I assume you instead mean "if the attacker can observe".

Of course the information hidden by implicit rejection might be leaked
by security failures in the KEM implementation. The title result of my
paper https://cr.yp.to/papers.html#ntrw is an example of this.

However, whatever risks there are of KEM implementations going wrong,
we're _definitely_ not going to be able to hide this information from
the attacker if we give the information to an endless list of KEM
applications---which is what explicit rejection does!

The security gap between implicit rejection and explicit rejection is so
extreme that it's easily visible in the standard IND-CCA2 definition,
even though that definition is blind to side-channel attacks.

Concretely, the attack example that I spelled out on list is a ROM
IND-CCA2 break exploiting explicit rejection and working efficiently
even for much larger sizes, whereas simply adding implicit rejection
gives the system a tight proof of ROM IND-CCA2 security from a
one-wayness assumption whose security level has barely been scratched by
half a century of cryptanalysis.

> Applications are not trying to hide this information (let alone with
> respect to timing, power, EM, etc. side channels) so we can assume
> that Bob can in fact observe this difference.

Again I assume that you mean the attacker rather than Bob.

Yes, if the KEM returns "reject" then the surrounding application is
likely to leak that to the attacker through timing, even if there aren't
more blatant leaks such as a "PKE decryption failed" error message.

No, this _doesn't_ break implicit rejection. With implicit rejection,
the KEM never returns "reject" to the application.

Yes, a subsequent MAC check in the application will reject the
attacker's modifications of Bob's ciphertext. No, this _does not_ break
implicit rejection: it doesn't tell the attacker which modifications are
coming from PKE rejections.

The KEM implementation _internally_ has to be careful to avoid leaking
the rejection that it sees from its internal PKE. Once the KEM hides
this information, it's simply not true that an application subsequently
checking a MAC is going to leak this information to an attacker.

> It may take a little longer, but the attack works nonetheless even if
> implicit rejection is used.

Formally, any extra time is enough to contradict the claim at issue
("As shown in [4], explicit rejection is as secure as implicit
rejection"), and in any case this claim goes beyond what "[4]" proves.

More importantly, where's the justification for the "works" claim? If
this is coming from the idea that an application's MAC check is leaking
the information exploited in the attack: no, that's not true. If you're
instead talking about easy side-channel attacks against unprotected
implementations, then that's not a valid predictor of what matters for
security, namely how easy it is to build a protected implementation.

From a systems-engineering perspective, the basic virtue of implicit
rejection is that it minimizes the amount of code that sees these
critical secrets (the "reject"-vs.-not information from the PKE). The
secrets are hidden within the KEM boundary, rather than being exposed
to the entire surrounding application.

---D. J. Bernstein
signature.asc

Joost Renes

unread,
Mar 13, 2025, 11:13:18 AMMar 13
to d...@cr.yp.to, pqc-...@list.nist.gov
smime.p7m

Tony Arcieri

unread,
Mar 13, 2025, 11:47:05 AMMar 13
to Joost Renes, d...@cr.yp.to, pqc-...@list.nist.gov
On Thu, Mar 13, 2025 at 9:13 AM Joost Renes <joost...@nxp.com> wrote:
For example, a device is trying to connect to a cloud server with static
ML-KEM key, to store some data.
A) If ML-KEM implicitly accepts, the ML-KEM handshake succeeds, and the data
is sent.
B) If ML-KEM implicitly rejects, the ML-KEM handshake fails, the data is not
sent.
A & B can be distinguished by anybody observing the communication, even
without side channels.

In the context of an AKE, implicit rejection would ultimately surface as something like a MAC verification failure due to e.g. a transcript mismatch.

An attacker can discern if such a MAC verification failure occurred because the connection will be dropped, but with implicit rejection the attacker can't distinguish if the failure was due to ML-KEM or any other mismatches in the handshake.

--
Tony Arcieri

David A. Cooper

unread,
Mar 13, 2025, 11:52:28 AMMar 13
to Joost Renes, pqc-...@list.nist.gov
Hello Joost,

Could you please explain your scenario in a little more detail? I am
also having difficulty understanding your claim that application
behavior will allow the attacker to determine whether the KEM implicitly
accepted or implicitly rejected the ciphertext.

I am assuming that the KEM is well implemented, i.e., the KEM itself
does not provide any timing, power, EM, or other side channels that
reveal whether the ciphertext was implicitly accepted or rejected. So,
the application provides a ciphertext and is provided with a shared
secret key, and the KEM provides no direct or side channel information
that would indicate whether the key was a result of implicit acceptance
or rejection.

So, what steps take place in the example below that result in the
handshake succeeding in one case and failing in the other? Are you
assuming that if the decapsulation implicitly accepts, then not only did
decapsulation succeed, but the resulting shared secret key is the same
as it would have been if the ciphertext had not been modified? Is that
why acceptance by the KEM results in the transaction succeeding? If so,
then perhaps you might argue why, in the case of ML-KEM, this is a
reasonable assumption. However, if you are making assumptions like this,
then it would be helpful for me to understand your argument if these
assumptions were explicitly stated.

Thanks,

David

On 3/13/25 8:13 AM, Joost Renes wrote:
> Hi Dan,
>
> I am not trying to argue that "explicit rejection is as secure as implicit
> rejection" as a broad statement.
> Nor am I trying to argue we should just use explicit rejection everywhere, I
> do agree it is good practice to minimize the handling of sensitive data as
> much as possible.
> I am trying to answer the question whether ML-KEM (or similar designed KEMs)
> Is broken if the binary result from re-encryption (Line 9 in Algorithm 18)
> is made public.
> If this is not the case, we should not claim this to be so.
>
>> More importantly, where's the justification for the "works" claim?
> For example, a device is trying to connect to a cloud server with static
> ML-KEM key, to store some data.
> A) If ML-KEM implicitly accepts, the ML-KEM handshake succeeds, and the data
> is sent.
> B) If ML-KEM implicitly rejects, the ML-KEM handshake fails, the data is not
> sent.
> A & B can be distinguished by anybody observing the communication, even
> without side channels.
>
> The application layer can prevent this by trying to make A look like B, but
> this is not what is typically done today, nor I think very realistic to expect
> in all scenarios.
>
> (And yes, I typed "Bob" instead of "attacker" a few times thanks for the
> correction.)
>
> KR,
> Joost

Christopher J Peikert

unread,
Mar 13, 2025, 12:00:34 PMMar 13
to pqc-...@list.nist.gov
On Thu, Mar 13, 2025 at 8:50 AM D. J. Bernstein <d...@cr.yp.to> wrote:
'Christopher J Peikert' via pqc-forum writes:
> The deficiency is that nothing ensures that "e" (the random input to the
> one-way encryption) is "authentic." Requiring this is a basic goal of the
> re-encryption paradigm; we want to ensure that whoever produced the
> ciphertext must have "known" the encryption algorithm's random input.

As one can easily see from Niederreiter's paper or from the description
that I posted, this PKE is deterministic: i.e., the PKE randomness is
the empty string, while e (a longer vector) is the PKE plaintext.

So, even if the caller chooses e randomly (as the KEM that I described
does), it's not correct that e is "the random input to the one-way
encryption".

It looks like we agree on the fact that "e" is the one and only input to N's one-way encryption that the caller chooses randomly, in the KEM you defined. (This is the only sensible reading of "the random input to [N's] one-way encryption," because N’s encryption is deterministic.)

I did not use a descriptive name for "e". We can call it the (randomly chosen) PKE "plaintext"; this has no bearing on my arguments, because they do indeed use "e" in this role -- there is no other role it could have.

Your objections are based on a nonsensical misreading that confuses "e" with the "empty randomness" of N's encryption algorithm, and blatant misrepresentations of what I clearly wrote.

First of all, empty randomness is automatically authentic, so the claim
of nothing ensuring authenticity of the randomness is incorrect.

This flagrantly misrepresents my claim.

I clearly wrote that nothing ensures the authenticity of e (not of the

More fundamentally, the attribution of this chosen-ciphertext attack to
a lack of reencryption is incorrect.

This is another blatant misrepresentation of what I wrote.

My attribution of the attack is clearly to a lack of “authentic e,” not a lack of reencryption; see again the quote above.

> FO and Dent ensure an "authentic e" by requiring "e" to be a certain
> hash output. Another way is to include a hash of "e" in the
> ciphertext. Both are conspicuously missing here.

This description of what FO and Dent require is incorrect, since e is
actually the PKE plaintext.

My description is correct. Dent Section 6 (the relevant transform here, for a deterministic PKE) ensures that the PKE plaintext (e here, X in Dent) is "authentic" by requiring it to be a certain hash output.

See Line 6: "check that phi(Y') = X", where Y' itself is (part of) a hash output and phi is a "smoothing" function that maps to the plaintext space.

Debunking three clear (and in my view, bad-faith) misrepresentations is quite enough, so I'll stop there.

I'll conclude by reiterating:
  • The system you proposed (with explicit rejection) does not conform to any widely used transform (e.g., FO or Dent) meant for attaining IND-CCA2 security. As far as I see, you don't dispute this.
  • Specifically, it omits an important "authenticity" check on "e" that such transforms include, which is why it fails to have IND-CCA2.
  • In my opinion, this makes it uninformative for evaluating the relative merits of implicit vs explicit rejection.
  • Instead, we should evaluate this question within the context of transforms and KEMs (like ML-KEM) that attain IND-CCA2 with either kind of rejection, and consider what other properties are gained or lost by the choice of rejection method (e.g., plaintext knowledge, robustness, anonymity, tightness, etc.).

Joost Renes

unread,
Mar 13, 2025, 1:59:32 PMMar 13
to david....@nist.gov, pqc-...@list.nist.gov
smime.p7m

David A. Cooper

unread,
Mar 13, 2025, 3:34:07 PMMar 13
to Joost Renes, pqc-...@list.nist.gov
Hello Joost,

I think your assumption is reasonable; it just helps me to understand
your argument if it is stated explicitly.

Given the way that ML-KEM works, it seems likely to me that it would be
infeasible to make any changes to the ciphertext and still have the
re-encryption step verify. If that is the case, then any modification to
the ciphertext would result in decaps outputting a rejection key.

It may also be the case that, for ML-KEM, implicit rejection provides no
security benefit over explicit rejection. However, I am not a
cryptographer, so my thoughts on this don't mean very much. I could very
easily be wrong.

My main interest was in understanding your assertion that even if the KEM
implements implicit rejection correctly, the application's behavior will
reveal whether or not the ciphertext was accepted. If I understand
correctly, your argument is that for ML-KEM (and similar KEMs) any
modification to the ciphertext will result in decaps implicitly
rejecting, and that the application will behave differently depending on
whether decaps outputs the shared secret key that would have been output
for the unmodified ciphertext or a rejection key.

Given that, I still don't understand the hypothetical application you
presented, where the handshake succeeds or fails depending on whether
decaps succeeds. Could you provide more details about the steps in the
protocol? Is it the device's or the cloud server's static ML-KEM key?
Which side decides that the handshake failed? How did it decide (MAC
failure)? Is the only attack option to modify the ciphertext while it is
in transit between the device and the cloud server (or vice versa), or
can the attacker replay an old transaction multiple times, manipulating
the ciphertext in different ways for each replay?

On 3/13/25 10:59 AM, Joost Renes wrote:
> Hi David,
>
> For a ciphertext c to be implicitly accepted, while the session key does not
> match the "correct" one, an attacker has to construct c such that
>
> c = ReEncrypt(ek, m', r'), where
> m' = Decrypt(dk, c)
>
> while not knowing m or dk, and by extension not knowing m', r', and
> guaranteeing that m' != m.
>
> These are not the types of ciphertexts that are being constructed in typical
> attacks, where some bits of a known ciphertext are flipped.
> For a random ciphertext, this property will not be true (with overwhelming
> probability).
> I cannot point you to a statement that proves it to be hard to construct such
> c, but someone else may be able to.
> Again, this comes back to the statement that FO transformations ensure only
> "authentic" ciphertexts to be accepted.
>
> KR,
> Joost
>
> -----Original Message-----
> From: David A. Cooper <david....@nist.gov>
> Sent: Thursday, March 13, 2025 10:52 AM
> To: Joost Renes <joost...@nxp.com>
> Cc: pqc-...@list.nist.gov
> Subject: Re: [EXT] Re: [pqc-forum] Re: Recommendations for Key-Encapsulation
> Mechanisms | Draft SP 800-227 is Available for Comment

Joost Renes

unread,
Mar 13, 2025, 4:26:48 PMMar 13
to david....@nist.gov, pqc-...@list.nist.gov
Hi David,

The exact criteria to abort may depend on the application, but one simple
example is Figure 7 from the NIST SP 800-227 IPD.

Alice acts as the server with static key pair (ek, dk).
Bob encapsulates and obtains session key K_B and ciphertext c.
Bob sends c to Alice, but an attacker may modify c to c_modified.
Alice decapsulates c_modified with dk, obtaining K_A (implicitly
accepting/rejecting).
Alice computes t = MAC(K_A, c) and sends t to Bob.
Bob verifies t on c with respect to his session key K_B.
If the MAC fails, Bob explicitly aborts the session.

Bob will abort if and only if his MAC fails,
which happens if and only if K_A != K_B,
which happens if and only if Alice implicitly rejected.
(up to some negligible probabilities)

So, anyone observing this protocol will learn whether Alice accepted or
rejected, based on Bob's explicit behavior.
Without explicit key confirmation, Bob may simply act differently in an
observable way (like, send a message with one header or another).
Observing Alice's behavior may also suffice, since she may take action if
and only if Bob does not abort.

David A. Cooper

unread,
Mar 13, 2025, 4:59:00 PMMar 13
to Joost Renes, pqc-...@list.nist.gov
Hello Joost,

I do not believe your example works. If the attacker modifies c to
c_modified, then Alice will compute t = MAC(K_A, c_modified) and Bob
will compare t to MAC(K_B, c). So, the MAC verification will fail
whether or not K_A = K_B since c != c_modified.
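A minimal check of this point (toy values of my own, with `mac` as an
HMAC-SHA-256 stand-in for the protocol's MAC): even in the best case for
the attacker, where decapsulation gives Alice and Bob the same key, Alice
MACs the ciphertext she received while Bob verifies against the one he
sent, so the verification fails either way.

```python
import hmac, hashlib, os

def mac(key, msg):
    # Toy MAC: HMAC-SHA-256 over the ciphertext, as in the quoted protocol.
    return hmac.new(key, msg, hashlib.sha256).digest()

K = os.urandom(32)                       # suppose decapsulation gave K_A == K_B == K
c = os.urandom(64)                       # Bob's ciphertext
c_modified = bytes([c[0] ^ 1]) + c[1:]   # attacker flips one bit in transit

t = mac(K, c_modified)                   # Alice MACs what she received
assert not hmac.compare_digest(t, mac(K, c))  # Bob's check fails regardless of the keys
```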

We could consider a modified version of this protocol, which uses
something like t = MAC(K_A, "test"), so that the MAC value will only
change if the key changes. But, this would only provide limited help to
the attacker, since the attacker could only try modifying the text once.

Since the attacker is relying on Bob's reaction, the attacker can only
test Alice's decaps "oracle" when Bob initiates a transaction. However,
Bob will generate a new (K_B', c') for each transaction. So, if the
attacker replaces c' with c_modified2 (another modified version of c),
then K_A will definitely differ from K_B', since c_modified2 has no
relation to c'.

Perhaps you can create an application that doesn't limit the attacker in
these ways. But, is there a realistic example that supports the claim
that applications will reveal to attackers whether decaps succeeded or
implicitly failed?

D. J. Bernstein

unread,
Mar 13, 2025, 5:51:16 PMMar 13
to pqc-...@list.nist.gov
Joost Renes writes:
> I am trying to answer the question whether ML-KEM (or similar designed
> KEMs) Is broken if the binary result from re-encryption (Line 9 in
> Algorithm 18) is made public.

Hmmm. Asking about the IND-CCA2 security level specifically of ML-KEM
is simultaneously avoiding the cryptanalysis literature and the proof
literature. Maybe IND-CCA2 for ML-KEM is breakable with both rejection
types; maybe it's secure with both rejection types.

Certainly cryptanalysts are looking at lattice-based cryptography (for
example, Zhao and Ding just announced a dimension-200 record using
similar resources to what previously reached dimension only 180) but
this doesn't mean that the attack efforts are specific to ML-KEM, or,
more to the point, exploring IND-CCA2 weaknesses in ML-KEM.

As for proofs, the latest Kyber spec claims that two papers "proved that
Kyber.CCAKEM is IND-CCA2 secure in the QROM, provided that Kyber.CPAPKE
is IND-CPA secure". (It's also reasonable to hope that IND-CCA2 security
is as high as QROM IND-CCA2 security.) However, a closer look shows
giant gaps in what the spec claims. If "Adv^mlwe" is, say, 1/2^100 then
the stated theorem becomes meaningless for attackers carrying out merely
2^49 hash computations. Cryptanalysts are normally looking at the cost
of breaking one-wayness problems, not at low-probability distinguishing
problems such as this "Adv^mlwe" problem.

> For example, a device is trying to connect to a cloud server with static
> ML-KEM key, to store some data.
> A) If ML-KEM implicitly accepts, the ML-KEM handshake succeeds, and the data
> is sent.
> B) If ML-KEM implicitly rejects, the ML-KEM handshake fails, the data is not
> sent.
> A & B can be distinguished by anybody observing the communication, even
> without side channels.

Given the thicket of unresolved questions about ML-KEM's security, the
focus on ML-KEM is distracting here; I'd rather stick to the example I
picked, where the attacks and proofs are much better understood. But
I'll phrase the following data-flow comments in more generality.

Certainly the distinction in typical applications between a successful
handshake and a failed handshake is visible on the network. But it is
simply not true that this reveals the information exploited in
explicit-rejection attacks, namely the information of whether an
internal PKE ciphertext was rejected.

On the contrary, implicit rejection hides this information from the KEM
output. Whatever the application subsequently does with the session key,
such as checking a MAC under the session key and aborting a handshake if
the MAC check fails, will _not_ reveal the internal PKE rejections.

In other words, it is simply not true that this application rejection
downgrades an implicit-rejection KEM to an explicit-rejection KEM.

Let's look at a full example. I'll use "UltraKEM" as a name for
whichever KEM is built from "UltraPKE"; when relevant, I'll distinguish
UltraKEM-implicit from UltraKEM-explicit. I'll mention reencryption
explicitly instead of assuming it's built into UltraPKE. I'll focus on
the "without side channels" situation, but the attacker sees network
traffic and can freely generate network traffic.

Here's a typical client-server protocol, with a prerequisite of the
server publishing a static UltraKEM key for everyone to use:

* A client has a service request (including the client and server
names, a random session identifier generated by the client, etc.),
such as asking for data storage as you mentioned (or asking for a
confirmed session to later do data storage---this won't matter for
the attack analysis).

* The client generates an UltraKEM ciphertext to the server's static
UltraKEM key.

* The client uses the UltraKEM session key for authenticated
encryption of the service request (with, say, AES-256-GCM).

* The client sends the UltraKEM ciphertext and the encrypted service
request to the server.

* The server decapsulates the UltraKEM ciphertext. If UltraKEM dec
returns "reject", the server sends back an error message "dec
failed" and ends the protocol.

* The server uses the UltraKEM session key for AES-256-GCM decryption
of the service request. If the MAC check here fails, the server
sends back an error message "mac failed" and ends the protocol.

* The server handles the service request, and uses the UltraKEM
session key for authenticated encryption of the response (again
including the session identifier etc.).

* The server sends the encrypted response back to the client.

There will then be client handling of the response, which won't matter
for attacking the handshake.
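The protocol above can be run as a toy, with the caveat that every
primitive here is a stand-in I made up for brevity: the KEM validity
check and AES-256-GCM are both replaced by HMAC-SHA-256 constructions,
and the toy client forms ciphertexts using the server secret directly
(a real client would use the public key). The only point the sketch
makes is the data flow: an explicit-rejection server emits two
network-distinguishable errors.

```python
import hmac, hashlib, os

SK = os.urandom(32)   # toy secret standing in for the server's static key material

def client_encapsulate():
    # Toy stand-in for KEM encapsulation (uses SK directly only for brevity).
    body = os.urandom(32)
    tag = hmac.new(SK, body, hashlib.sha256).digest()[:8]
    return body + tag, hashlib.sha256(b"key" + body).digest()

def decaps_explicit(ct):
    # Toy "decrypt + reencrypt" validity check; None plays the role of "reject".
    body, tag = ct[:-8], ct[-8:]
    if not hmac.compare_digest(hmac.new(SK, body, hashlib.sha256).digest()[:8], tag):
        return None
    return hashlib.sha256(b"key" + body).digest()

def server(ct, msg):
    k = decaps_explicit(ct)
    if k is None:
        return "dec failed"              # one network-visible error...
    payload, mac_tag = msg[:-32], msg[-32:]
    if not hmac.compare_digest(hmac.new(k, payload, hashlib.sha256).digest(), mac_tag):
        return "mac failed"              # ...distinguishable from the other
    return "ok"

ct, k = client_encapsulate()
payload = b"store this"
msg = payload + hmac.new(k, payload, hashlib.sha256).digest()
assert server(ct, msg) == "ok"
assert server(bytes([ct[0] ^ 1]) + ct[1:], msg) == "dec failed"   # bad ciphertext
assert server(ct, payload + bytes(32)) == "mac failed"            # good ct, wrong MAC
```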

In an earlier message I reviewed, for one well-known choice of UltraPKE,
how an IND-CCA2 attack works against UltraKEM-explicit. The same attack
easily breaks the above protocol for the same choice of KEM:

* The attacker observes a legitimate client's ciphertext.

* The attacker sends modified ciphertexts to the server.

* Some of those modified ciphertexts are bad (meaning that UltraPKE
decryption fails to produce a plaintext, or that reencryption of
the plaintext doesn't produce the same ciphertext), which triggers
UltraKEM-explicit to say "reject", which triggers the application
to say "dec failed", which the attacker sees on the network.

* The other modified ciphertexts turn into "mac failed".

* The attacker works backwards from this pattern to deduce the
UltraPKE plaintext, at which point the attacker also easily
computes the UltraKEM session key, the secret service request,
and the secret response.

The literature has many other examples of chosen-ciphertext attacks
that fit the same structure; I've cited a few representative examples.
The cryptanalytic work in the papers is explaining, for each of the
cryptosystems broken in this way, which ciphertext modifications to
apply (e.g., the replacement of "eK" with "eK+fK" in my example) and how
to deduce the client's secrets from that. For what I'll say next, all
that matters is the basic data flow.

My understanding is that people aren't disputing the ability of implicit
rejection to sometimes rescue IND-CCA2---i.e., for purposes of this
discussion, we can simply postulate that UltraKEM-implicit does achieve
IND-CCA2. Instead people are claiming that implicit rejection fails to
rescue the security of the application.

For example, there's a claim that, "because the larger infrastructure
that uses the KEM aborts connections once it notices that the key is
improper", this infrastructure turns "implicit rejects into explicit
ones". Certainly the prerequisite I've just quoted is satisfied in this
protocol: the server notices that the attacker's key is improper
(because of the MAC check in AES-256-GCM), sends back "mac failed", and
stops the protocol.

The claim, then, is that this somehow downgrades UltraKEM-implicit to
UltraKEM-explicit in the context of this protocol. We saw devastating
attacks against UltraKEM-explicit; ergo, UltraKEM-implicit is broken in
this protocol.

But that's simply not true. There are at least three different ways to
see this; I'll briefly summarize two of them that I won't rely on here,
and then I'll go into detail on the third.


1. The intimidate-people-with-proofs approach: We've hypothesized that
UltraKEM-implicit achieves IND-CCA2. At this point one can pull out
references to proofs supposedly guaranteeing that protocol P, which
supposedly covers what I've stated above, achieves security property S,
which is supposedly capturing what users want handshake protocols to
achieve, if the underlying KEM achieves IND-CCA2.

This approach is error-prone (see https://eprint.iacr.org/2019/1336 for
many failure examples), subject to abuse, and typically unenlightening.
Proofs can be useful in cryptography _if_ they're handled carefully,
but, as we've seen repeatedly in this thread, they often (maybe even
usually!) aren't.


2. The put-up-or-shut-up approach: I spelled out full details of an
attack against UltraKEM-explicit for my example of UltraPKE (and cited
my software demonstrating the same type of attack for another example of
a PKE). If UltraKEM-implicit is supposedly breakable in this protocol,
where's the attack description?

This is another approach that _can_ work but whose basic problem is
again illustrated by this thread. When readers are faced with a repeated
claim, and an unanswered challenge, should readers conclude that the
claim is wrong? Or that the claim is correct and the challenge is so
silly as to not warrant a response?


3. The educational approach: We've seen how the attacker breaks
UltraKEM-explicit in this protocol, using the data flow

* bad ciphertext -> "dec failed";

* good ciphertext -> "mac failed".

Let's now change to UltraKEM-implicit and think through what this means
from the attacker's perspective.

As a reminder, UltraKEM-implicit differs from UltraKEM-explicit in how
it handles the case of a bad ciphertext. Specifically:

* When the server generates an UltraPKE private key, it also
generates an independent random "implicit-rejection key", say 256
bits.

* When UltraKEM-implicit sees a bad ciphertext, it returns a session
key obtained by hashing the implicit-rejection key and the
ciphertext. For comparison, UltraKEM-explicit says "reject" in
these cases.

One consequence of implicit rejection is that UltraKEM-implicit is
quiet, meaning that it never says "reject"---it always returns a session
key. This guarantees that the server won't say "dec failed".

The attack stated above against UltraKEM-explicit in this protocol,
looking at which modified ciphertexts produce "dec failed" and which
modified ciphertexts produce "mac failed", will suddenly stop working if
we plug in UltraKEM-implicit: there are no more "dec failed" results, so
all of the modified ciphertexts produce "mac failed", which isn't
telling the attacker anything.

This doesn't end the analysis. The claim I'm challenging doesn't say
that _exactly_ the same attack works; it says that one can exploit the
protocol's "explicit rejects". There are still two cases in the
protocol: the "mac failed" rejections and the successful handshakes. The
attacker can see the difference between those.

But the critical point is that these two cases _aren't_ the cases that
the attacker was looking at before, namely the bad-ciphertext case
(which turned into "dec failed") and the good-ciphertext case (which
turned into "mac failed"). Both of those cases are now funneled into
"mac failed".

Wait, what if the attacker can modify the client's ciphertext into
ciphertexts that produce successful handshakes from the server instead
of "mac failed"?

Let's think this through from the attacker's perspective ("we" in this
list is the attacker):

* We have no idea what the session keys are for the bad ciphertexts;
it's not as if we can break the hash function used for implicit
rejection. So the bad ciphertexts are practically guaranteed to
produce "mac failed".

* Our only hope, then, is to (at least sometimes) trigger successful
handshakes from the server when our modifications bump into _good_
ciphertexts.

* To trigger a successful handshake, we need to send not just a good
ciphertext but also a MAC under the corresponding session key.

* Our only hope for generating a MAC under the corresponding session
key is to know what the session key is (assuming we don't have an
AES-256-GCM attack).

* But, hmmm, we're starting with having no idea what the client's
UltraPKE plaintext is, so how are we supposed to modify the
ciphertext into a ciphertext where we _do_ know the plaintext?

* We can do whatever we want to generate a plaintext, encrypt that,
and include a corresponding MAC. Yes, this will trigger a
successful handshake. But we know this in advance, so we aren't
learning anything from the fact that it succeeds!

In my concrete example of ciphertexts eK where e is a specified-weight
vector and K is the public key, the UltraKEM-explicit attack was doing
simple things like adding f = (1,1,0,0,...,0) to e (by adding fK to the
ciphertext), which sometimes produces a vector of the same weight,
revealing interesting information about e. The question of whether e+f
is a valid plaintext isn't something the attacker knows in advance: it's
useful information about e that the attacker learns from the server's
reaction in the case of UltraKEM-explicit.
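A tiny numeric illustration (Python; the parameters n = 8, w = 3 are
toy values chosen only for readability, far below real sizes): adding
f = (1,1,0,...,0) to a weight-w vector e over GF(2) preserves the
weight exactly when one of e's first two bits is 1 and the other is 0,
so observing "same weight" reveals a parity bit of e.

```python
import itertools

n, w = 8, 3                      # toy parameters
f = [1, 1] + [0] * (n - 2)

def weight(v):
    return sum(v)

def add_gf2(a, b):
    # Coordinatewise addition of bit vectors over GF(2).
    return [x ^ y for x, y in zip(a, b)]

same = total = 0
for support in itertools.combinations(range(n), w):
    e = [1 if i in support else 0 for i in range(n)]
    total += 1
    if weight(add_gf2(e, f)) == w:
        same += 1
        # Same weight happens exactly when e[0] != e[1], so the
        # server's reaction leaks information about e.
        assert e[0] ^ e[1] == 1

print(same, total)               # 30 of the 56 weight-3 vectors
```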

For UltraKEM-implicit, what the attacker has to do is instead to modify
eK into pK where p is a plaintext that the attacker _knows_ (to be able
to send a valid MAC). Now taking p = (1,1,0,0,...,0)+e doesn't work at
all: the attacker has no idea what e is, so the attacker has no idea
what p is, so the attacker can't come up with a corresponding MAC.

The only way for the attacker to come up with a MAC that matches p is to
know the session key (a hash of p), which forces the attacker to know p.
The attacker, knowing p and generating a MAC accordingly, then knows
that the server will complete the handshake successfully. But, again,
this means that the attacker learns nothing from the fact of completion,
and in particular learns nothing about e.

See how different these rejection cases are?

What I've written down above is actually along the lines of a protocol
proof starting from IND-CCA2---not with all details, but I think enough
to explain why explicit rejection in the application is very different
from explicit rejection in the KEM.

---D. J. Bernstein

Joost Renes

unread,
Mar 13, 2025, 6:18:11 PMMar 13
to david....@nist.gov, D. J. Bernstein, pqc-...@list.nist.gov

D. J. Bernstein

unread,
Mar 13, 2025, 9:03:08 PMMar 13
to pqc-...@list.nist.gov
'Christopher J Peikert' via pqc-forum writes:
> This is the only sensible reading of "the random input to [N's]
> one-way encryption,"

On the contrary, there is a well-established meaning in the applicable
literature of the random input to a PKE, and this meaning exactly fits
your previous message, while your new revision is inconsistent with both
(1) the literature and (2) your previous message.

For example, 1999 FO---highlighted in your previous message---describes
encryption in its input PKE as follows: "E^asym_pk(message;coins)
indicates the asymmetric encryption of the indicated message using the
indicated coins as random bits".

The main FO construction is then the chosen-ciphertext-attack-protected
ciphertext "E^asym_pk(sigma;H(sigma,m)) || E^sym_{G(sigma)}(m)". Notice
that the second PKE input, the randomness, is replaced by hash output.
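In pseudocode-level Python, that replacement looks as follows. The
XOR-based `pke_enc` and `sym_enc` here are deliberately insecure toys;
only the shape of the construction, coins = H(sigma, m), is the point.

```python
import hashlib

def H(sigma, m):                # supplies the PKE's "coins"
    return hashlib.sha3_256(b"H" + sigma + m).digest()

def G(sigma):                   # supplies the symmetric key
    return hashlib.sha3_256(b"G" + sigma).digest()

# Deliberately insecure toy PKE: a deterministic function of
# (pk, sigma, coins), standing in for E^asym_pk(sigma; coins).
def pke_enc(pk, sigma, coins):
    ks = hashlib.shake_256(pk + coins).digest(len(sigma))
    return bytes(a ^ b for a, b in zip(sigma, ks))

def sym_enc(key, m):            # toy stand-in for E^sym
    ks = hashlib.shake_256(key).digest(len(m))
    return bytes(a ^ b for a, b in zip(m, ks))

def fo_encrypt(pk, sigma, m):
    # E^asym_pk(sigma; H(sigma,m)) || E^sym_{G(sigma)}(m):
    # the random input to the PKE is replaced by hash output,
    # so the decryptor can recompute the coins and reencrypt.
    return pke_enc(pk, sigma, H(sigma, m)) + sym_enc(G(sigma), m)
```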

You correctly wrote in your previous message that FO requires the random
input to the PKE to be "a certain hash output". The critical error in
your message was conflating this with "e" in the specific PKE under
discussion: "nothing ensures that 'e' (the random input to the one-way
encryption) is 'authentic.' ... [random histrionics] ... FO and Dent
ensure an 'authentic e' by requiring 'e' to be a certain hash output".

See how these words say that e is the random input to the PKE, and that
FO requires e to be a certain hash output? See how the FO quotes force
the random input to the PKE to be a certain hash output?

All of this fits together---except that, no, e _isn't_ the random input
to the PKE we're talking about.

You now claim that, actually, you meant the plaintext e when you said
"random input". But this isn't reconcilable wth your claim that "FO
[requires] 'e' to be a certain hash output". It's simply not true that
FO requires the plaintext to be a hash output.

What's worse: making incorrect statements about a specific PKE, or
making incorrect statements about FO?

Side note to any students in the audience: In the original McEliece
cryptosystem, there's a message m and randomness e. If you know this and
see the words "Niederreiter's compressed version of the original
McEliece system", do you find yourself guessing that Niederreiter's
system also has a message m and randomness e? There's nothing wrong with
guessing, but always remember that part of the basic job of a scientist
is to avoid misrepresenting guesses as facts. There are many ways to see
that this particular guess is wrong: for example, take the time to read
the description of the compressed system.

> because N’s encryption is deterministic.

This is claiming that it isn't "sensible" to apply PKE definitions that
have a random input to the case of deterministic PKEs.

In fact, this makes perfect sense mathematically _and_ is what happens
in, e.g., the HHK paper (https://eprint.iacr.org/2017/604). That paper's
general definition of a PKE specifically refers to the randomness used
in encryption:

The encryption algorithm Enc, on input pk and a message m ∈ M,
outputs an encryption c <- Enc(pk, m) of m under the public key pk.
If necessary, we make the used randomness of encryption explicit by
writing c := Enc(pk, m; r), where r ←$ R and R is the randomness
space.

The "T" transform in HHK, much like FO, replaces the random input to the
PKE with hash output, so that reencryption finds the same ciphertext.

HHK specifically considers the case of Enc being deterministic (see
Theorem 3.6, for example, and the paper's notes on what T accomplishes).
That's the case of the randomness space R having size 1; i.e., the case
of r having 0 bits of randomness.

Of course, if Enc is deterministic, then replacing r with hash output is
a no-op, since all length-0 strings are equal. This makes reencryption
particularly easy, and turns out to have further security benefits; see
https://cr.yp.to/papers.html#footloose.
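A hedged sketch of that deterministic case (a toy scheme, not HHK's
construction or a real PKE; the appended tag exists only so that
mismatches are possible, and decryption here needs no secret, which a
real PKE obviously would): with R of size 1, re-encryption is simply
recomputing Enc(pk, m) and comparing.

```python
import hashlib

# Toy deterministic "PKE" (randomness space R of size 1): Enc is a
# function of (pk, m) alone. Deliberately insecure; for shape only.
def enc(pk, m):
    pad = hashlib.shake_256(pk + b"pad").digest(len(m))
    body = bytes(a ^ b for a, b in zip(m, pad))
    tag = hashlib.sha3_256(pk + m).digest()[:8]
    return body + tag

def dec(pk, c):
    body = c[:-8]
    pad = hashlib.shake_256(pk + b"pad").digest(len(body))
    return bytes(a ^ b for a, b in zip(body, pad))

def decapsulate(pk, irk, c):
    """Re-encryption check, with implicit rejection on mismatch."""
    m = dec(pk, c)
    # Replacing r with hash output is a no-op here, so the check is
    # simply: does Enc(pk, m) reproduce the received ciphertext?
    if enc(pk, m) != c:
        return hashlib.sha3_256(irk + c).digest()
    return hashlib.sha3_256(m).digest()
```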

> - The system you proposed (with explicit rejection) does not conform to
> any widely used transform (e.g., FO or Dent) meant for attaining IND-CCA2
> security.

Simply switching from explicit rejection to implicit rejection not only
stops the attack but brings this system within HHK, BHHHP, and common
practice in deployed systems.

The problem illustrated by this attack isn't with what has actually been
proven. The problem is with the exaggerations, such as "As shown in [4],
explicit rejection is as secure as implicit rejection".

[ claiming to reiterate: ]
> - Specifically, it omits an important "authenticity" check on "e" that
> such transforms include, which is why it fails to have IND-CCA2.

Actually, this isn't a repetition of your previous misdiagnosis; it's a
new misdiagnosis.

Your previous message was claiming that FO requires the random input to
the PKE to be a hash output, and that this particular PKE doesn't do
that. The first part of this claim is correct; the second is incorrect.

Your new message shifts to labeling e as plaintext, so you're claiming
that FO requires the plaintext input to the PKE to be a hash output,
and that this particular PKE doesn't do that. The second part of this
claim is correct; the first is incorrect.

---D. J. Bernstein

D. J. Bernstein

unread,
Mar 14, 2025, 2:16:21 PMMar 14
to pqc-...@list.nist.gov
Joost Renes writes:
> Note that the attacker can definitely initiate the session themselves,

Certainly. As I mentioned before, some chosen-ciphertext attacks in the
literature are extracting Alice's private key, and some are extracting
Bob's plaintext. You're talking about the first type; the bit-flipping
attack that I explained in detail is of the second type. (There are
other types; see below.)

The survey of chosen-ciphertext attacks and defenses in Section 3 of
https://cr.yp.to/papers.html#ntrw starts by explaining how defenses 0
(hashing the plaintext), 1 (rigidity), and 2 (no decryption failures)
protect the private key. Protecting plaintexts is a more complicated
story. The survey continues with five further defenses, one of those
being implicit rejection, the defense whose value I've been illustrating
in this thread by giving examples of attacks that it blocks.

Why is protecting plaintexts more complicated?

Well, notice first that any attack that extracts the private key can
also extract plaintexts---the attacker simply follows the regular dec
algorithm. So protecting plaintexts can't be _less_ complicated.

Clearly the converse can't always be true. An attack that extracts
plaintexts can't always extract private keys. Specifically, consider
obtaining a seed from an RNG, hashing it with SHAKE to obtain all the
randomness you need for key generation, renaming the RNG seed as the
"private key", and then saying "ha ha you can't invert SHAKE so you'll
never extract my private key"! Or, without relying on the strength of
SHAKE, consider simply expanding the "private key" to include 256 unused
bits---then the attacker will never recover the "private key"!
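The seed trick in Python form (hashlib's SHAKE; the sizes and the
split of the expanded stream are arbitrary illustration, not any real
scheme's keygen):

```python
import hashlib

def keygen_from_seed(seed):
    """Expand a 32-byte seed into all key-generation randomness.

    Toy illustration: sk_material stands in for whatever a real
    keygen consumes. The holder stores only `seed` and recomputes
    sk_material on demand, so the stored "private key" is the seed.
    """
    stream = hashlib.shake_256(seed).digest(96)
    sk_material, pk_material = stream[:64], stream[64:]
    return seed, pk_material

seed = b"\x0a" * 32              # in practice: fresh RNG output
private_key, public_key = keygen_from_seed(seed)
# An attacker who somehow learned sk_material still couldn't produce
# `seed` without inverting SHAKE; yet could decrypt just fine.
```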

This "clearly" paragraph is formalism run amok: it begs the question of
whether an attacker can recover what cryptanalysts call an "equivalent
private key", something that's as usable for the attacker as "the"
private key (while not necessarily having the same format or the same
usage algorithms). Theorists tend to give up at this point and resort to
indistinguishability definitions, but those definitions don't match what
cryptanalysts are actually studying.

What I find more convincing is the chosen-ciphertext-attack literature
often failing to extract (equivalent) private keys while still being
able to extract plaintexts. For example,

https://cypherpunks.ca/~iang/pubs/paper-reaction-attacks.pdf

tries to extract private keys, and succeeds for a lattice system, but
for a code-based system it extracts only one plaintext at a time ("we
must repeat the attack for each ciphertext we wish to decrypt").

I don't mean to suggest that designers always succeed in protecting
private keys. On the contrary, we've seen some proposals cutting corners
on defense 2 and being broken by decryption-failure attacks. But we
don't have to use those proposals; we can simply insist on eliminating
decryption failures.

To summarize: You're correct in thinking that implicit rejection isn't
the defense that matters for protecting private keys---but that's the
relatively easy part of designing a KEM to stop chosen-ciphertext
attacks. Implicit rejection is instead addressing the more challenging
problem of protecting the plaintext against chosen-ciphertext attacks.

Happily, implicit rejection---together with the other best practices
used in the KEM that I described in detail---also enables the strongest
proofs available of QROM IND-CCA2 security, the proofs that manage to
simultaneously minimize security assumptions on the underlying PKE (the
PKE merely has to be one-way) while maximizing tightness. So there's no
incompatibility between (1) what cryptanalysts recommend and (2) what
one sees from a careful look at the theorems. This changes when the word
"careful" is removed; that's what triggered this thread

---D. J. Bernstein