[Pqc-forum] post-quantum benchmarking and RNGs

D. J. Bernstein

Jul 23, 2017, 11:23:18 AM
to pqc-...@list.nist.gov
I've put up two blog posts related to the post-quantum project. The
first blog post announces that the SUPERCOP benchmarking framework is
now ready for post-quantum submissions, including KEMs:

https://blog.cr.yp.to/20170719-pqbench.html

This includes several recommendations for NIST. The second blog post is
a deeper dive into random-number generation:

https://blog.cr.yp.to/20170723-random.html

The main focus of this blog post is "fast-key-erasure RNGs". Compared to
fast-key-erasure RNGs, NIST's AES-CTR-DRBG is more complicated, harder
to analyze, more likely to be a speed problem, and more dangerous, as
the blog post explains. Fast-key-erasure RNGs are now used in SUPERCOP.
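For concreteness, the fast-key-erasure structure can be sketched in a few lines of Python. This is an illustration of the idea only, not SUPERCOP's implementation: SHAKE-256 stands in for the AES-256-CTR expander the blog post uses, and the class name and buffer sizes are invented for the sketch.

```python
import hashlib
import os

class FastKeyErasureRNG:
    """Sketch of a fast-key-erasure RNG: each refill expands the current
    256-bit key into a buffer, immediately replaces the key with the first
    32 bytes of that buffer (erasing the old key), and serves output from
    the remainder.  SHAKE-256 stands in for an AES-256-CTR expander."""

    def __init__(self, seed=None):
        self.key = bytearray(seed if seed is not None else os.urandom(32))
        self.pool = b""

    def _refill(self, n=768):
        buf = hashlib.shake_256(bytes(self.key)).digest(32 + n)
        # Overwrite the old key in place before exposing any output bytes.
        self.key[:] = buf[:32]
        self.pool = buf[32:]

    def random_bytes(self, n):
        out = b""
        while len(out) < n:
            if not self.pool:
                self._refill()
            need = n - len(out)
            take, self.pool = self.pool[:need], self.pool[need:]
            out += take
        return out
```

The point of the construction is that compromise of the current key never reveals earlier output: the key that produced a batch of output is destroyed before the batch is handed out.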

---Dan

Danilo Gligoroski

Jul 25, 2017, 5:51:43 AM
to pqc-...@list.nist.gov
Hi all,


I would like to discuss two recommendations that Dan has put on his blog:

1. "Recommendation to NIST: Adjust the API to clearly allow submissions to call GMP."

I really like and appreciate this recommendation. I am implementing a KEM scheme using C code generated by the MPFQ software library for manipulating finite fields. The generated mpfq_2_256.c and mpfq_2_256.h need GMP installed (a C program that uses MPFQ routines must #include <gmp.h>). Putting the whole GMP installation package in our NIST submission package would be an unnecessary complication (and probably a source of many installation bugs and incompatibilities). If NIST would allow submissions to call GMP without having to bundle the GMP library themselves, that would be very practical and helpful.


2. "Recommendation to NIST: Make an explicit list of automatic conversions that NIST is guaranteed to apply."

Making an explicit list of KEM <-> PKE conversions might be useful for the competition (standardization) process, but it is not important for us as proposers of a KEM scheme. At the moment, being focused on our scheme, we simply do not care how the conversion will be done. I guess the situation is similar for the developers of PKE schemes, but maybe I am wrong about that; if so, I would like to hear some arguments for why we should care about the NIST KEM <-> PKE conversion scheme.

Best regards,
Danilo!


D. J. Bernstein

Jul 25, 2017, 9:02:21 AM
to pqc-...@list.nist.gov
Danilo Gligoroski writes:
> I would like to hear some arguments why we should care about the NIST
> KEM <-> PKE conversion scheme.

I think KEMs will be a central focus. It's clear that the session key
transmitted by a KEM can be used to authenticate and encrypt any number
of messages. KEM submitters shouldn't have to worry about specifying
details of how that's done.

There will be some PKEs too---either because of the weight of history
making submitters think that PKEs are what's really wanted, or because
they see serious efficiency benefits from trying to pack more than 256
message bits into a ciphertext. Of course a PKE can also be used as a
KEM: send a random session key as a message.

Presumably some PKE submissions will be worse than the results of
converting KEM submissions into PKEs. Here's the failure mode I'm
worried about: if

* submitters don't do the KEM -> PKE conversion and
* NIST doesn't do the KEM -> PKE conversion

then the PKE submitters can object to any attempt at comparison.
Similarly, presumably some KEM submissions will be worse than the
results of converting PKE submissions into KEMs, but if

* submitters don't do the PKE -> KEM conversion and
* NIST doesn't do the PKE -> KEM conversion

then the KEM submitters can object to any attempt at comparison.

What I'm recommending is for NIST to now specify conversions that NIST
is going to automatically apply. Then submitters don't have to worry
about these failure modes _and_ don't have to worry about the details,
unless they want to try to do better somehow.

Of course ECDH can also be used as a KEM and as a PKE. I don't know
whether isogeny-based DH submissions will have enough security features
to make the same conversions work.

---Dan

Moody, Dustin (Fed)

Jul 25, 2017, 11:11:06 AM
to pqc-...@list.nist.gov
Regarding conversions, the following is posted on our FAQ (and has been for a while):

Q: What are the "standard conversion techniques" NIST will use to convert between public-key encryption schemes and KEMs?

A: To convert a public key encryption function to a KEM, NIST will construct the encapsulate function by generating a random key and encrypting it. The key generation and decapsulation functions of the KEM will be the same as the key generation and decryption functions of the original public key encryption scheme. To convert a KEM to a public key encryption scheme, NIST will construct the encryption function, by appending to the KEM ciphertext, an AES-GCM ciphertext of the plaintext message, with a randomly generated IV. The AES key will be the symmetric key output by the encapsulate function. (The key generation function will be identical to that for the original KEM, and the decryption function will be constructed by decapsulation followed by AES decryption.)
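The two conversions described in the FAQ answer can be sketched generically. This is a hypothetical Python illustration of the structure only: the toy encrypt-then-MAC AEAD (SHAKE-256 keystream plus HMAC-SHA-256 tag) stands in for AES-GCM, which is not in the Python stdlib, and the function names are invented.

```python
import hashlib
import hmac
import os

# Toy encrypt-then-MAC AEAD standing in for AES-GCM (not in the stdlib):
# XOR with a SHAKE-256 keystream, then an HMAC-SHA-256 tag over iv || body.
def toy_aead_seal(key, iv, msg):
    stream = hashlib.shake_256(key + iv).digest(len(msg))
    body = bytes(m ^ s for m, s in zip(msg, stream))
    tag = hmac.new(key, iv + body, hashlib.sha256).digest()
    return body + tag

def toy_aead_open(key, iv, ct):
    body, tag = ct[:-32], ct[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, iv + body, hashlib.sha256).digest()):
        raise ValueError("bad tag")
    stream = hashlib.shake_256(key + iv).digest(len(body))
    return bytes(c ^ s for c, s in zip(body, stream))

# PKE -> KEM (per the FAQ): encapsulate by encrypting a fresh random key.
def pke_to_kem_encaps(pke_encrypt, pk):
    session_key = os.urandom(32)
    return pke_encrypt(pk, session_key), session_key

# KEM -> PKE (per the FAQ): ciphertext is KEM ciphertext || random IV ||
# DEM ciphertext of the message under the encapsulated key.
def kem_to_pke_encrypt(kem_encaps, pk, msg):
    kem_ct, key = kem_encaps(pk)
    iv = os.urandom(16)                     # the random IV under discussion
    return kem_ct + iv + toy_aead_seal(key, iv, msg)
```

Decryption in each direction is the obvious inverse: decapsulate-then-open for the KEM-derived PKE, and decrypt-the-key for the PKE-derived KEM.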


Yongge Wang

Jul 25, 2017, 12:22:06 PM
to pqc-...@list.nist.gov
Random-generator design is a very complicated issue, and I think it is beyond this PQC standardization process. It could be a separate process. That is why I proposed that NIST provide a uniform DRBG API, so that one can use that API to get deterministic randomness if the entropy input is the same. Otherwise, it is a distraction for most people who plan to submit their packages. In particular, there are many implementation-based attacks on DRBGs, and some submitters may not be experts in these implementation domains. This will let people focus on their "post-quantum" part.

For the NIST DRBG, would the concerns in https://blog.cr.yp.to/20170723-random.html
be resolved if one overwrote the "entropy" or "key" string immediately
after each use of the random generator? That is, instead of using
"free(entropy)", one uses "memcpy(entropy, RANDOM,
entropyLen*sizeof(unsigned char))".
Would that at least be equivalent to the fast key erasure proposed by Dan?
DoD requires overwriting an HDD seven times to completely erase its contents;
for RAM, will a single overwrite be sufficient?
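In Python terms (the language used for the sketches in this thread), the memcpy-style overwrite looks like the snippet below. Two hedged notes: the multi-pass DoD rule concerns remanence on magnetic media, and a single overwrite is the usual assumption for RAM; and Python cannot guarantee that no other copies of the secret exist elsewhere in memory, which is why C implementations manage such buffers directly.

```python
import os

key = bytearray(os.urandom(32))   # the secret must live in a mutable buffer
# ... the key is used here ...
key[:] = bytes(len(key))          # one in-place pass of zeros over the buffer
assert key == bytearray(32)       # this buffer no longer holds the old key
```

Had the key been an immutable `bytes` object, no in-place overwrite would be possible at all; the mutable buffer is what makes the erasure meaningful.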

Yongge

D. J. Bernstein

Jul 25, 2017, 2:11:10 PM
to pqc-...@list.nist.gov
Moody, Dustin (Fed) writes:
> Regarding conversions, the following is posted on our FAQ (and has
> been for a while):
> Q: What are the "standard conversion techniques" NIST will use to
> convert between public-key encryption schemes and KEMs?

Sorry for not spotting this earlier! This looks detailed enough to take
care of my main KEM->PKE and PKE->KEM concerns.

The only thing bothering me is the IV being random. It's not that it
causes huge problems---

* it's only 16 extra bytes to send;
* it's only a little bit of extra code;
* I don't think that there are any security issues hidden in the
extra separation from the core of what's analyzed in the literature

---but the whole argument _for_ it sounds like NIST is quietly shifting
the top security goalposts to some unclear position _beyond_ AES-256.

There are 256-bit secret values all over the place in high-security
long-term designs. AES-256 keys are simply the most obvious example.
Each of these internal values, if used 2^64 times, has to be presumed to
allow "attacks" that take time "only" 2^192. Maybe we'll be able to
"fix" this by adding one complication after another to higher-level
protocol details and finding proofs that the complications stop the
attacks---but history shows that this is a lot of work, often done
incorrectly, and that adding complications creates _real_ security
risks.

Perlner, Ray (Fed) writes:
> For example, if the key generation algorithm is seeded with only 256
> bits of entropy, then an attacker can recover 1 out of 2^64 private
> keys in "only" 2^192 attempts

That's also a successful attack on "key search on a block cipher with a
256-bit key (e.g. AES 256)", which is the top target security level
stated in the call.

I'm assuming, of course, that in evaluating security levels we're taking
into account "resistance to multi-key attacks", which is stated as a
"desirable property" in the call. The standard notion of multi-key
attacks breaks 2^64 AES-256 keys in "only" 2^192 attempts.
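The accounting behind these numbers is simple enough to write down. A sketch of the generic multi-target cost, nothing scheme-specific:

```python
# Generic multi-target key search: against U = 2^u independent k-bit keys,
# an attacker testing random guesses against all targets at once expects a
# hit on *some* key after about 2^(k-u) work.
def multi_target_work_log2(key_bits, log2_users):
    return key_bits - log2_users

assert multi_target_work_log2(256, 64) == 192  # 2^64 AES-256 keys: ~2^192 work
assert multi_target_work_log2(128, 48) == 80   # the AES-128 analogue
```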

If NIST is actually aiming for something beyond the AES-256 security
level, then I'd like to see a clear statement of what the target is and
the rationale for this target. If not, then what's the motivation for a
random IV for AES-256-GCM?

There's no actual threat of attackers guessing 256-bit secrets, even
with multi-target attacks.

> This is the same reason TLS 1.3 uses a pseudorandom IV for the first
> message in its GCM ciphersuites.

You're referring, indirectly, to security analysis of AES-128, which is
quantitatively in a completely different universe from AES-256.

---Dan

Perlner, Ray (Fed)

Jul 25, 2017, 3:20:48 PM
to pqc-...@list.nist.gov
DJB: but the whole argument _for_ it sounds like NIST is quietly shifting the top security goalposts to some unclear position _beyond_ AES-256.

Hopefully I can clarify what the original intent of the security strength levels was. The point of the 5 security strength categories, given in section 4.A.5 of the call for proposals, was to classify the computational complexity of attacks in a way that didn't assume a particular model of computation. The intent of the categories was never to specify the attacker's goals or what data would be available to the attacker. (This information is instead given by the security definitions in sections 4.A.2, 4.A.3 and 4.A.4.) To further clarify, security strengths 1,3, and 5 are intended to be defined by the computational complexity of a key recovery attack against a small number of known plaintext-ciphertext pairs encrypted under a single key. (A large number of protocols implementing AES-256, including TLS, also protect against multi-key attacks, but this is not crucial to our intended definitions.)

As it happens, multi-key attacks are not considered in the security definitions for 4.A.2, 4.A.3, and 4.A.4, so protecting against them is optional. However, submitters may wish to consider stronger security definitions, and in section 4.A.6, the call for proposals suggests additional desirable security properties, one of which is resistance to multi-key attacks. I don't think any of this represents a shifting of goalposts from what was originally in the call for proposals.

I don't dispute that attacks involving more than 2^192 work can justly be assumed impractical. That said, if we are ignoring all such attacks, AES 192 with a random IV would achieve this security more cheaply than AES 256 with no IV, at least for reasonably large plaintexts. In the end, we decided, when making one-size-fits-all suggestions for symmetric primitives, to target the maximum amount of security which was deemed at all desirable by the CFP, since as you said, the additional cost is still pretty small. We could have given a wider range of suggestions targeting different security strengths and security definitions, but we felt this would make things more complicated. Submitters wishing to make more granular tradeoffs between performance and targeted security are of course free to do so, although we still encourage them to use some sort of NIST approved primitive.


D. J. Bernstein

Jul 25, 2017, 5:23:51 PM
to pqc-...@list.nist.gov
So NIST's top security target for a post-quantum submission is to have

* the _multi-user_ security of that submission matching
* the _single-user_ security of AES-256, which is beyond
* the _multi-user_ security of AES-256?

Is there a rationale for this bizarre-sounding choice?

I realize that NIST is terrified of the _appearance_ of a security loss.
This is illustrated by NIST's crackpot 2^512 preimage requirement for
SHA-3 (can't have anything sound worse than SHA-2!), which led in a
fairly predictable way to things like this:

https://www.imperialviolet.org/2017/05/31/skipsha3.html

I realize that the same fear is driving NIST to ask for public-key
submissions to match---as a top target---the security level of AES-256.
This means overemphasizing meaningless levels of pre-quantum security;
this is likely to slow down deployment of post-quantum crypto.

But, even accepting all of these unfortunate facts, I don't understand
what NIST sees as the problem with a submission that's

* as secure as AES-256 in a realistic multi-user attack scenario and
* as secure as AES-256 in an oversimplified single-user scenario.

There's no loss of security. There's no _appearance_ of a loss of
security. And yet NIST seems to be setting its top target _higher_ than
this, expressing a preference for a submission whose _multi-user_
security matches the _single-user_ security of AES.

Some submissions will have less degradation in a realistic multi-user
attack scenario than AES does. I thought that NIST was recognizing this
in saying that "resistance to multi-key attacks" is "desirable". This
makes those submissions safe to use with _smaller_ parameters---and
those submissions should be able to argue this.

But it now sounds like NIST will be saying that, for the top security
target, these submissions _won't_ be allowed to use smaller parameters.
Meanwhile submissions that _do_ degrade the same way as AES will need
_larger_ parameters or risk being kicked out in favor of more
"desirable" submissions that provide no actual security benefits.

> AES 192 with a random IV would achieve this security more cheaply than
> AES 256 with no IV, at least for reasonably large plaintexts.

Are you _sure_ that it would achieve this security?

There seem to be all sorts of bugs in GCM-related security proofs. Where
can I find a verified analysis of the quantitative security impact of a
random IV in GCM? How about other protocols: e.g., the random "V" inside
AES-CTR-DRBG?

(This comparison is also warped by AES tying speed to key size. Starting
from 12-round AES-192, switching to 12 rounds of AES-256 is cheaper,
more convincing, and easier to analyze than randomizing the IV.)

Anyway, I don't see post-quantum submitters worrying about the minor
speed differences between AES-256 and AES-192 and trying to cut things
close. What I see is people using 256-bit secrets all over the place---
because there are all sorts of convenient 256-bit primitives sitting
around, they seem fast enough, and we don't need more than this for
real-world security for the foreseeable future.

Some of these 256-bit secrets are going to be easier to attack in batch
than others. Do we really want to be spending time on these meaningless
"attacks", and attempts to "fix" the systems? This is entirely an
artifact of NIST setting a beyond-AES-256 security goal.

The big issue here isn't the cost for the computer. The big issue is
that security-analysis time is limited. If NIST encourages the community
to spend time counting 2^200 angels on the head of a pin, then that's
precious human resources taken away from figuring out which post-quantum
systems we should be deploying before the apocalypse.

The right thing to do is say that 256 bits are big enough---whether or
not they can be attacked in batch---and that anyone who wants more than
this should wait for a subsequent round of standardization.

---Dan

Perlner, Ray (Fed)

Jul 26, 2017, 1:49:50 PM
to pqc-...@list.nist.gov
Including some sort of random salt in a GCM IV is generally good practice. Yes, it doesn't really matter for AES-256, since a 256 bit key is way larger than is plausibly needed for security. Nonetheless, once we standardize a use of symmetric cryptography (including a new public key standard) people are going to expect to be able to freely substitute Approved algorithms unless we make a big point saying you can't. We don't want to be in a position where it is safe to use AES-128 for common applications like TLS and IPSec (that use 96 and 32 bits of randomness in their GCM IVs respectively), but not safe to use AES-128 in our new public key standard. Whatever academic cryptographers expect, mere mortals tend to expect that when we say something has 128, 192, or 256 bits of security, then the attacker has to do 2^128, 2^192, and 2^256 AES operations worth of work respectively, in order to compromise the crypto in any useful way. We should be consistent in advocating modes of operation that meet these expectations insofar as it is practical.

We are therefore sticking with our suggestion (not requirement) to use AES-256 GCM with a random IV in cases where a single message authenticated encryption mode is needed, and the submitter doesn't want to think too hard about what symmetric primitive is best. (We'd advocate simply incrementing the counter by something you're sure exceeds the maximum message length in cases where additional messages are encrypted under the same key.)

Note the intent here is not to focus submitters' attention on meeting the highest possible security level; it is simply to offer a one-size-fits-all choice which won't hamper the submitter's ability to meet whatever security level they think is reasonable to target. We have said in both the CFP and the FAQ that we care more about security strengths 2 and 3 than security strengths 4 and 5.

>From the CFP:
"NIST recommends that submitters primarily focus on parameters meeting the requirements for categories 1, 2 and/or 3, since these are likely to provide sufficient security for the foreseeable future. To hedge against future breakthroughs in cryptanalysis or computing technology, NIST also recommends that submitters provide at least one parameter set that provides a substantially higher level of security, above category 3. Submitters can try to meet the requirements of categories 4 or 5, or they can specify some other level of security that demonstrates the ability of their cryptosystem to scale up beyond category 3."

>From the FAQ:
"It is, however, NIST's present belief that all five of the security strength categories provide sufficient security to allow for standardization. More precisely, NIST would describe security strengths 4 and 5 as 'likely excessive,' 2 and 3 as 'probably secure for the foreseeable future,' and security strength 1 as 'likely secure for the foreseeable future, unless quantum computers improve faster than is anticipated.'"



D. J. Bernstein

Jul 26, 2017, 11:17:23 PM
to pqc-...@list.nist.gov
Kyber sounds like it's producing a 256-bit session key---and there are
all sorts of other design decisions flowing from this. It is therefore
breakable as a multi-user KEM (with standard definitions---see, e.g.,
Appendix B of https://eprint.iacr.org/2016/360.pdf) in "only" 2^256/U
guesses, where U is the number of users, say 2^48.

Suppose another submission produces larger session keys, say 320 bits.
Will this submission be allowed to claim that it's better than Kyber
because it achieves the "desirable" property of top 2^256 multi-user
security for 2^48 (or even more) users?

If so, this is a distraction from settling on a sensible post-quantum
standard. Even if there is some rationale for trying to match AES-256,
this is going _beyond_ AES-256, and I see no rationale for that.

Regarding the supposed analogy between AES-128 and AES-256: The reality
is that there is a gigantic gap between

(1) finding someone's AES-128 key in 2^80 guesses and
(2) finding someone's AES-256 key in "only" 2^208 guesses.

#1 is a real problem, and something needs to be done about it. #2 is so
wildly removed from being a real problem that complaining about it is a
terrible waste of time. I totally understand paying attention to future
risks---I've been advocating research into post-quantum cryptography for
14 years---but 2^208 is _far_ beyond any reasonable planning horizon.

Perlner, Ray (Fed) writes:
> mere mortals tend to expect that when we say something has 128, 192,
> or 256 bits of security, then the attacker has to do 2^128, 2^192, and
> 2^256 AES operations worth of work respectively

If so, then why does NIST claim that AES-128 has 128 bits of security?
The simple fact is that typical AES-128-based protocols leak secret keys
to real-world attackers with far less work than 2^128 AES computations.

There's a long history of half-hearted attempts to tweak the protocols
to try to obtain more security with AES-128. The security claims are
usually not quantified and are rarely backed by any security analysis.
In many cases it's easy to see that the claims are wrong.

(Does AES-128-CTR-DRBG claim a security benefit from its random 128-bit
IVs? How much security benefit? Where is this analyzed?)

The easy, robust fix is to discard AES-128 and upgrade to 256-bit keys.
At that point the problem is gone, the motivation for "fixing" the
protocols is gone, and we can focus on more challenging problems---which
also means not wasting time on things like random IVs, 320-bit seeds,
and 320-bit session keys.

> Note the intent here is not to focus submitters attention on meeting
> the highest possible security level

NIST's crackpot 2^512 security level did tremendous damage to SHA-3,
even though there were several lower security levels. Most people don't
like to see a mix of different solutions for different security levels.

---Dan

Mike Hamburg

Jul 27, 2017, 3:38:42 AM
to pqc-...@list.nist.gov
+1 to this, except that Kyber Paranoid doesn't claim 256-bit classical security anyway.

We should aim to increase security metrics that meaningfully correspond
to the risk of attack. Going from 2^192 to 2^256 estimated classical work
is meaningful as a hedge against improvements in cryptanalysis. The same
could be said for adding a margin against non-generic multi-user attacks that
might be amenable to improvement. But using a random IV or 320-bit key
to prevent the generic 2^256/U attack is unlikely to reduce the risk of any
realistic attack.

-- Mike

Perlner, Ray (Fed)

Jul 27, 2017, 4:56:50 PM
to pqc-...@list.nist.gov
Dan,

The article you linked, https://eprint.iacr.org/2016/360.pdf, does not appear to support your claim that a KEM producing a 256-bit shared secret has at most 2^256/U multi-user security. The article points out that such multi-key degradation will occur if used with a DEM that is not randomized, which is precisely why we suggest the use of a DEM that is randomized. The security of AES GCM as suggested by us, and as used in TLS (with a pseudorandom 96 bit nonce) is analyzed in https://eprint.iacr.org/2017/435.pdf . It is found to have no multi-key degradation in either integrity or confidentiality. We therefore see no reason to expect that Kyber's natural shared secret length of 256 bits should hamper its ability to attain multi-user security up to security strength 5. (Of course the designers of Kyber may speak for themselves in this regard, and they are in no way required to target such a high security strength in the first place.)

Assuming, however, that some potential submitter is unable to attain security strength 5 with or without multi-user security guarantees, then yes, other submitters are allowed to claim superiority in this respect. Submitters are allowed to claim superiority over other submitters for any reason they like, and we will evaluate all such claims based on our own judgement. (At this point it is probably worth reminding everyone that we do not consider our postquantum standardization process to be a competition.) That said, we have previously said we will prioritize the ability to target the middle security levels over the very highest, and we still stand by that.

Finally, you seem to be advocating that NIST respond to the possibility of multikey attacks by withdrawing AES128, rather than by advocating for modes of operation that have built-in multi-key security. Given that
1) AES-128 is the most widely used block cipher at present, and it has never come even close to being practically attacked based on an insufficiently large key size.
2) Most widely-used high-volume protocols, where multi-key security is actually a concern (e.g. TLS and IPSec) already have built-in protections against multi-key attacks.
It seems premature to pull AES128. Therefore, we stand by our suggestion that if you're looking for a good, generic authenticated encryption mode, you should use AES-GCM pretty much the way TLS 1.3 does it. And since, in this context, we're trying to give a suggestion that covers the full range of plausible security levels, we are suggesting the use of AES256 in this mode.

-Ray


D. J. Bernstein

Jul 27, 2017, 9:45:44 PM
to pqc-...@list.nist.gov
Perlner, Ray (Fed) writes:
> The security of AES GCM as suggested by us, and as used in TLS (with a
> pseudorandom 96 bit nonce) is analyzed in
> https://eprint.iacr.org/2017/435.pdf . It is found to have no
> multi-key degradation in either integrity or confidentiality.

No. The paper doesn't say what you're claiming, and it should be obvious
that nothing along the lines you're claiming could possibly be proven.

Assume (reasonably) that all plaintexts have a standard pattern. Imagine
that (surprisingly) AES-128 actually has 2^80 incredibly weak keys that,
in counter mode, turn this pattern into a quickly detectable ciphertext
pattern. This is a fast probability-1/2^48 attack in the single-key
model, and a fast probability-U/2^48 attack against U users of AES-GCM,
which is the maximum conceivable multi-user degradation factor, quite
different from "no multi-key degradation".

Given that the original AES-GCM security "proof" turned out to be wrong,
given that there are many newer errors along similar lines (consider the
AEZ attack or https://eprint.iacr.org/2017/708.pdf), and given the broad
context of constant errors in proving and applying security theorems, I
think it's clear that _much_ more care is warranted here.

> AES-128 is the most widely used block cipher at present, and it
> has never come even close to being practically attacked based on an
> insufficiently large key size.

An attacker carrying out 2^90 AES-128 computations merely needs to
locate a few hundred billion ciphertext blocks from targets that have
applied their AES-128 keys to a particular input block. It's clear that

* AES-128 is being used in a wide range of different ways, many of
which include predictable input blocks, and

* some real-world attackers are already close to 2^90 and might even
have reached it already.
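The "few hundred billion" figure follows directly from the keyspace arithmetic:

```python
# 2^90 guesses against the 2^128 AES-128 keyspace start paying off once the
# number of target keys (each observed on a predictable input block) reaches
# about 2^(128 - 90) = 2^38.
targets_needed = 2 ** (128 - 90)
assert targets_needed == 274_877_906_944   # ~275 billion ciphertext blocks
```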

I can't imagine how NIST would plausibly dispute either part of this. Is
the argument supposed to be that NIST has some comprehensive overview of
AES applications, knows how often predictable input blocks are used, and
is confident that those don't add up to a few hundred billion blocks?
Sorry, not plausible.

The bottom line is that 128-bit keys are dangerous. Sure, this isn't the
biggest problem that we're facing in cryptography, but it's a noticeable
problem already, and it's easy to fix: simply upgrade to 256-bit keys.

More to the point:

* NIST's _failure_ to upgrade to 256-bit keys is being used as the
rationale for one poorly analyzed complication after another in
this post-quantum project (communicate a random IV, use seeds
longer than keys, etc.), and then

* these complications are being unnecessarily inflicted upon people
who _are_ sensibly using 256-bit keys.

Most, perhaps all, post-quantum submitters would be perfectly happy
taking 256-bit keys and then focusing on the serious public-key
challenges that this project is supposed to be about. But, no, it looks
like NIST is setting the top security target _beyond_ this, and so the
submitters will be pressured to jump through extra randomization hoops.

> The article you linked, https://eprint.iacr.org/2016/360.pdf, does not
> appear to support your claim that a KEM producing a 256-bit shared
> secret has at most 2^256/U multi-user security.

I said it was, with cost 2^256/U, breakable as a multi-user KEM _with
standard definitions_. Formally:

* There's a standard family of KEM->PKE conversions (with a DEM as a
parameter) initiated in the Cramer--Shoup paper, studied in many
subsequent papers, and used in standards from ISO etc.

* There's a standard PKE multi-user security definition in
http://cseweb.ucsd.edu/~mihir/papers/musu.pdf.

* Under the standard multi-user security definition, each PKE
obtained in the standard way from this KEM is broken for U users
in "only" 2^256/U session-key guesses.

NIST seems to think that the KEM actually has security 2^256 (assuming
the public-key part isn't broken). What exactly is the multi-user KEM
security definition that NIST is using here?

If the 256-bit session keys were replaced by 128-bit session keys then
the same attack would take only 2^128/U session-key guesses, and I think
we can all agree that this is an unacceptably low security level when U
is large. There seems to be a dispute regarding who to blame for this,
i.e., where to apply a fix:

(1) I blame the KEM for selecting parameters that produce 128-bit
session keys. The fix is to upgrade to 256-bit session keys.

This fix is easy to analyze using standard definitions in the
literature, and it convincingly eliminates the problem.

(2) NIST blames the users of the KEM (pretty much the entire KEM-DEM
literature) for doing something deterministic. Apparently NIST
advocates Zaverucha's maybe-fix, namely to tell all the users
that they have to stick random numbers into random places: e.g.,
choosing a random 96-bit nonce for AES-128-GCM, or was that a
random 32-bit nonce?

Compared to fix #1, this is a much more drastic deviation from
the literature, and is much harder to analyze. It also seems to
provide a speed-security tradeoff that's significantly worse
overall than fix #1.

Beyond this, NIST seems to think that people who have applied fix #1
should be encouraged to _also_ provide maybe-fix #2. NIST's argument, if
I understand it correctly, is that there's some sort of marketing
failure (something about what "mere mortals tend to expect") in a system
that provides "only" 2^192 security for 2^64 users. But this raises many
questions, such as the following:

* If "only" 2^192 security for 2^64 users is a problem, then why does
NIST continue to recommend standards that clearly have much lower
security, such as AES-128-GCM, for use through "2031 and beyond"?

* If GCM is supposed to randomize its IV, then where can I find the
GCM standard recommending against the "deterministic construction"
in Section 8.2.1 of that standard? I see a _requirement_ to _use_
the deterministic construction for short nonces. I also see a
data-length limit that's lifted _if_ the user chooses 96-bit nonces
(which are recommended anyway) via the deterministic construction;
this makes deterministic 96-bit nonces sound safest.

* Does NIST have an inventory of batch attacks against existing NIST
standards? Does NIST have a schedule for changing its standards to
eliminate these attacks? Where can I find this documented?

I see very little evidence that NIST is systematically "advocating for
modes of operation that have built-in multi-key security". This looks
like a new ad-hoc complication for the post-quantum project.

It's much simpler to (1) use 256-bit secret keys and (2) stop worrying
about batch attacks. I don't see a security argument against this simple
approach---especially in the context of a public-key project.

> Submitters are allowed to claim superiority over other submitters for
> any reason they like, and we will evaluate all such claims based on
> our own judgement.

There's clearly a special status for security targets that are set by
NIST in advance. It's particularly important for these targets to be
clear and justified.

---Dan

Moody, Dustin (Fed)

Jul 28, 2017, 10:33:25 AM
to pqc-...@list.nist.gov
We appreciate the discussion. NIST will continue to keep to its original suggestions. Please note these are NOT requirements.


While NIST will permit submitters to choose any NIST approved cryptographic algorithm for their submission if they feel it is necessary to achieve the desired security and performance, a number of potential submitters have asked us to offer default options for common symmetric cryptographic primitives. As such, here are our suggestions:

1. Hash functions: SHA512 is likely sufficient to meet the requirements of any of our five security strength categories and gives good performance in software, especially on 64-bit architectures. Submitters seeking a variable-length output, good performance in hardware, or multiple input strings may instead prefer to use TupleHash256 (specified in SP 800-185).
2. XOFs: We would recommend SHAKE256.
3. Authenticated encryption: We'd suggest AES256-GCM with a random IV.
4. PRFs: Where security proofs can accommodate something that is not indifferentiable from a random oracle, an AES-based seed-expander will offer excellent performance (more details to come). Otherwise, KMAC256 (specified in SP 800-185) will be a good choice.
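Of these suggestions, the hash and the XOF are directly available in, for example, Python's hashlib (AES256-GCM and KMAC256 are not in the stdlib). A quick sketch of the first two, with an invented seed string:

```python
import hashlib

seed = b"post-quantum submission seed"   # hypothetical input

# 1. Hash: SHA512 produces a fixed 64-byte digest.
digest = hashlib.sha512(seed).digest()
assert len(digest) == 64

# 2. XOF: SHAKE256 produces output of any requested length.
expanded = hashlib.shake_256(seed).digest(136)
assert len(expanded) == 136

# Prefix property of an XOF: a shorter read is a prefix of a longer one.
assert hashlib.shake_256(seed).digest(32) == expanded[:32]
```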

Also recall, from the CFP: "If the scheme uses a cryptographic primitive that has not been approved by NIST, the submitter shall provide an explanation for why a NIST-approved primitive would not be suitable."


We will soon have more guidance on some of the randomness issues, so that submitters will hopefully not have to concern themselves too much in this regard.


