[lwc-forum] tag length impact on confidentiality security


MEGE, Alexandre

Nov 12, 2019, 12:20:52 PM
to lightweig...@nist.gov, lwc-...@list.nist.gov

Dear subscribers of the NIST Lightweight Cryptography mailing list,

 

The LWC call for algorithms specifies a minimum tag length of 64 bits.

I would argue that this tag length is inadequate to guarantee 112-bit confidentiality for algorithms without nonce-misuse protection under an adaptive forgery scenario.

 

The LWC call for algorithms only considers integrity attacks in the adaptive forgery scenario and confidentiality attacks in the adaptive chosen-plaintext scenario.

I show below that a failure of authentication can be exploited to mount a nonce-misuse attack.

In most LWC algorithms, this nonce-misuse attack is catastrophic and leads to a loss of confidentiality.

 

Attack description

Attacker scenario:

Adaptive forgery scenario: Eve can call the decryptor with any message except the one sent by Alice. If the message sent to the decryptor has the correct tag, Eve gets the associated plaintext.

 

Attack:

Eve forges a message with nonce Na, ciphertext Ca, and candidate tag Ta, and sends it to the decryptor. By chance or brute force (at most 2^64 decryption queries for a 64-bit tag), Eve eventually finds a valid tag Ta.

By providing this message to the decryptor, Eve gains access to the full information (nonce, plaintext, ciphertext, tag) for this nonce Na.

At another moment, the legitimate user Alice uses this same nonce Na to encrypt a new message (it is her first message sent with this nonce, so she respects the nonce single-use requirement).

Alice sends the legitimate encrypted message (nonce Nl = Na, ciphertext Cl, tag Tl) and it is intercepted by Eve.

With those two messages, Eve has two ciphertexts under the same nonce: enough information to mount a nonce-misuse attack.

For an algorithm without nonce-misuse security, Eve can recover the first plaintext block or even the complete plaintext.
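
To make this last step concrete, here is a minimal Python sketch, assuming the scheme degenerates to a reused keystream under nonce repetition (as CTR-mode and stream-cipher cores without misuse resistance do); all names are illustrative:

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # Step 1: the successful forgery gives Eve a (plaintext, ciphertext)
    # pair under nonce Na, hence the keystream used with Na.
    def recover_keystream(forged_plaintext: bytes, forged_ciphertext: bytes) -> bytes:
        return xor(forged_plaintext, forged_ciphertext)

    # Step 2: Alice's later message under the same nonce Na reuses that
    # keystream, so Eve decrypts it directly.
    def decrypt_intercepted(keystream: bytes, alice_ciphertext: bytes) -> bytes:
        return xor(keystream, alice_ciphertext)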

 

Conclusion

The tag size effectively limits the confidentiality security of LWC algorithms not protected against nonce misuse in the adaptive forgery scenario.

For algorithms without nonce-misuse protection, 64-bit tags limit the confidentiality security to 64 bits in this scenario.

 

Best regards,

Alexandre Mège

 


Joan Daemen

Nov 28, 2019, 3:24:56 AM
to lwc-...@list.nist.gov
Dear NIST,

For the hash function component of the NIST lightweight submissions you
have required support for a 256-bit hash result. I think it would be
beneficial if NIST allowed candidates to specify a XOF rather than a
hash function, with security claims related to collision resistance,
(2nd) pre-image resistance, or strength against other attacks, covering
all output lengths.

In many applications the length of the hash result is determined by the
application. Think of full-domain hashing in signatures or the
generation of pseudorandom streams in post quantum schemes. For that
reason, it would be convenient to support arbitrary-length output as is
the case for an extendable output function (XOF) like SHAKE128.

This does not conflict with a 256-bit hash function, as one can truncate
the hash result for shorter digests and apply constructions such as MGF1
for longer ones, providing security claims for both the shorter and
longer output lengths.
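
For concreteness, here is a minimal Python sketch (hashlib only) of both options: native XOF output versus truncation and counter-based MGF1 around a fixed 256-bit hash. SHA3-256 and the example lengths are illustrative choices, not recommendations:

    import hashlib

    # Native XOF: arbitrary-length output directly from SHAKE128.
    def xof_output(data: bytes, out_len: int) -> bytes:
        return hashlib.shake_128(data).digest(out_len)

    # Workaround around a fixed 256-bit hash: truncate for shorter
    # digests, expand with counter-based MGF1 (as in PKCS#1) for longer.
    def mgf1(seed: bytes, out_len: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < out_len:
            out += hashlib.sha3_256(seed + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:out_len]

    digest = hashlib.sha3_256(b"message").digest()
    short = digest[:14]          # truncated to 112 bits
    long_ = mgf1(digest, 256)    # expanded to 2048 bits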

Kind regards,

Joan Daemen

Markku-Juhani O. Saarinen

Nov 28, 2019, 5:17:08 AM
to lwc-forum
Hello,

I'd like to back this; a NIST-standardized lightweight XOF would be very helpful for many reasons. I may be repeating myself, but the main reason is that the upcoming NIST PQC standard primitives are likely to use symmetric cryptography in a completely different way from NIST's older RSA- and ECC-based standards. A plain hash function is simply not very useful.

I've seen arguments such as "efficiency does not matter in asymmetric cryptography if it's post-quantum". We need to remember that the 2020s post-quantum transition affects essentially all cryptographic applications, including lightweight ones. Some of the new proposals are more efficient than RSA and ECC; a lightweight NIST XOF could even help asymmetric cryptography enter entirely new application areas. This may seem surprising; when I was engineering RSA or ECC applications I certainly didn't have to budget a *majority* of my joules, cycles, or gates for a XOF (or any other symmetric component).

Most of the faster candidates in the NIST post-quantum cryptography effort use a XOF (SHAKE) extensively. Truncating a hash function with a natural XOF mode first to, say, 256 bits and then expanding with MGF1 adds an unnecessary complication and a significant performance penalty when the said hash/XOF is already the energy and performance bottleneck of the entire construction.

Currently there is only one NIST-standard XOF algorithm, SHAKE. Fast hardware implementations of the Keccak permutation are quite large, often larger than the implementation of the actual asymmetric portion of the primitive, so hardware PQC designers often resort to ad hoc solutions based on lightweight stream ciphers. It would also be helpful if the new algorithm were faster in software; often more cycles are spent on the Keccak permutation than on anything else.

Cheers,
- markku

Dr. Markku-Juhani O. Saarinen <mj...@pqshield.com> PQShield, Oxford UK.

D. J. Bernstein

Nov 29, 2019, 5:12:34 AM
to lwc-...@list.nist.gov
I like the idea of designing hash-function output lengths far beyond the
target security levels. As far as I know, the first public request to
standardize such hash functions was in an April 2007 message "Silly ties
between security and output length" that I sent to hash-...@nist.gov:

Suppose I want to generate a 2048-bit hash for 2048-bit RSA-FDH. It
would be stupid of me to worry about hash-function attacks taking
time 2^2048, or 2^1024, or even 2^256; an attacker who can perform
2^256 operations can easily factor my 2048-bit RSA modulus.

I could choose a 256-bit hash providing 128 bits of conjectured
security against all the relevant attacks, and then concatenate 8
different hash outputs to obtain my 2048-bit result. These 8
hash-function computations produce a very noticeable slowdown in
RSA-FDH verification; in fact, if I replace RSA by state-of-the-art
Rabin-Williams variants, I end up spending _most_ of my time on the
hash-function computations.

As a designer I [see] no reasons to think that a 512-bit hash
function at a 128-bit security level has to take twice as long as a
256-bit hash function at a 128-bit security level ... [next message:]
For example, the Grindahl-256 hash function presented at FSE 2007 is
actually a 416-bit hash function designed conservatively for 128-bit
security---and then truncated to 256 bits. The original 416-bit hash
function is clearly worth considering for RSA.

If we're using a sponge for the RSA-FDH example, we can reduce rounds
for squeezing additional blocks, since there's no adversarial input. Or
we can expand a 256-bit hash with AES-CTR, and figure out how many
rounds of AES-CTR are really needed for RSA-FDH. Or we could use a more
efficient large-block design with better diffusion.
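
As an illustration of the AES-CTR option, a minimal Python sketch using the pyca/cryptography package; hashing with SHA3-256 and the all-zero initial counter block are illustrative choices, and a real design would need proper domain separation:

    import hashlib
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Expand a 256-bit hash of the message into a long FDH output by
    # using the digest as an AES-256 key and taking the CTR keystream.
    def long_hash(message: bytes, out_len: int) -> bytes:
        key = hashlib.sha3_256(message).digest()
        encryptor = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()
        return encryptor.update(b"\x00" * out_len)  # keystream = encrypted zeros

    fdh = long_hash(b"message to sign", 256)  # 2048 bits for RSA-2048 FDH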

_However_, I'm not at all sure that it's a good idea to suddenly change
the rules of the lightweight competition to emphasize this use case (or
the similar post-quantum use cases). I think this would need careful
consideration of how important long-output lightweight hash functions
really are, versus how much distraction this would bring to security
reviewers and implementors. Asking each team to specify a XOF mode is
going to create extra work and extra errors; is this really what we want
submissions judged on? If all the work is done correctly, will it end up
changing the decisions of which lightweight submissions to standardize?

https://eprint.iacr.org/2019/844.pdf shows a post-quantum submission,
kyber1024, spending 70% of its time on hashing. I cherry-picked this
example to make hashing inside post-quantum crypto sound as important as
possible:

* The example uses a huge hash function with a 1600-bit state.
  (NIST's rules made it difficult for people to use smaller hash
  functions in post-quantum submissions.)

* The example uses a CPU, the Cortex-M4, that punishes huge hash
  functions (there are only 14 usable 32-bit registers) while
  providing a battery of single-cycle multiplication instructions
  that are useful for some types of mathematical computations.

* The example uses a post-quantum function that takes advantage of
  these multiplication instructions. (kyber1024 has the fastest
  "level 5" decryption result in the paper, and if hashing is removed
  then it's also the fastest for key generation and encryption.)

This leaves many unanswered questions. Does the cost of SHAKE256 here
(roughly a million Cortex-M4 cycles) matter compared to the amount of
data communicated (a 1.5-kilobyte ciphertext plus any amount of user
ciphertext and plaintext)? Does this type of long-output hashing matter
compared to the many uses of minimal hash output lengths? Is long-output
hashing a good fit for the goals of the lightweight competition? Never
mind the questions above regarding the downsides of adding XOFs at this
point in the competition.

Hash functions are used in many different ways. As designers we can see
all sorts of ways that these applications could be made more efficient;
long-output hashes are just one example. This is great for writing
papers, but it's an auditing nightmare, and how often do these papers
show that the extra efficiency matters for cryptographic users?

---Dan

Joan Daemen

Nov 29, 2019, 9:42:21 AM
to lwc-...@list.nist.gov
Dear all,

First of all, thank you Markku and Dan for your reactions. I agree with
most of what both of you say, and I share your concern about changing
the rules of this competition halfway through.

Still, I think it would be useful for users and cryptanalysts to have
precise security claims for these hash functions when the output length
n is different from 256.

If the output is truncated to n<256, it would be interesting to know
what security strength is claimed, at least with respect to collision
resistance or (2nd) pre-image resistance. This could allow the user to
truncate to, e.g., 112 bits in bandwidth-limited applications where
only, say, 2nd pre-image resistance is needed. Without such a claim, the
user has no guidance on whether he/she can truncate at all.

The NIST submission requirements currently do tolerate truncation but
emphasize the output size of 256 bits and seem to consider the
"description of security properties" of truncated hashes as a bonus. I
think for a hash function that has the ambition to call itself
lightweight a precise security claim for any output length is
indispensable, as it allows the user to truncate to the minimum length
that still meets his/her security strength target. I think it is
interesting to have these claims early on as these define the targets
for cryptanalysts, i.e., what the functions will be scrutinized against.
Therefore, it would have made sense to require such claims from the
start. But of course submitters can still formulate such claims now.

For n>256 the formulation of claims is probably less critical but, well,
if you support output lengths beyond 256 bits, you may as well specify a
corresponding claim, right?

Kind regards,

Joan


Joan Daemen

Dec 2, 2019, 5:56:58 AM
to lwc-...@list.nist.gov
Dear NIST,

For the authenticated encryption schemes in the NIST submissions you have not required support for intermediate tags, nor do you mention them in your requirements document. Support for intermediate tags was identified as something useful in discussions during the CAESAR competition. Another way of seeing it is that messages form a session, where a valid tag on each message authenticates the full sequence of messages. I think this is relevant in lightweight applications as it helps keep messages short by authenticating the context in the form of previous messages in the session.

Therefore, I think it would be beneficial if NIST would make a statement on whether support for sessions is seen as an interesting feature.

On top of being a useful feature from a functional point of view, with the Keccak team we have been arguing for a long time that it also helps protect against side-channel and fault attacks. Let me explain, inspired by the presentation of Francois-Xavier Standaert at the NIST lightweight workshop in Gaithersburg. In duplex-based authenticated encryption as we defined it at SAC 2011, the only moment the key is used is at the beginning of the session, and the security of the remainder is based on the secrecy of an evolving state that depends on the sequence of all messages.

In the case of unique session keys (e.g. coming from a Diffie-Hellman key exchange or a ratchet as in the Signal protocol), the permutation implementation requires no additional protection against DPA or faults. If the number of times a key is used can be limited, building nonce-respecting implementations combined with the nonce-trickling absorption trick specified by Taha and Schaumont in their paper presented at HOST 2014 greatly limits the attack surface. Having sessions helps in this respect as it reduces the number of computations with a given key.
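
To illustrate the shape of this (emphatically not the actual SAC 2011 duplex construction), a toy Python sketch where SHAKE128 stands in for the permutation: the key enters only at initialisation, and each tag authenticates the whole session so far.

    import hashlib

    # Toy session AE in the duplex spirit: a secret state evolves with
    # every ciphertext, so each tag depends on the full message history.
    class ToySession:
        def __init__(self, key: bytes, nonce: bytes):
            self.state = hashlib.shake_128(key + nonce).digest(32)

        def wrap(self, plaintext: bytes):
            ks = hashlib.shake_128(self.state + b"ks").digest(len(plaintext))
            ct = bytes(p ^ k for p, k in zip(plaintext, ks))
            self.state = hashlib.shake_128(self.state + ct).digest(32)
            tag = hashlib.shake_128(self.state + b"tag").digest(8)
            return ct, tag  # tag vouches for the whole session so far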

Kind regards,

Joan Daemen

Thomas Peyrin (Assoc Prof)

Dec 2, 2019, 11:49:02 AM
to Joan Daemen, lwc-...@list.nist.gov

Hi Joan and all,

 

Sorry, I am not sure I fully understood the subtlety of what a session is, but I was wondering: why not simply use a new nonce to define your new session (maybe allowing part of the nonce to encode your session ID)? This would allow using the API everyone is currently using. I guess sessions are interesting mostly for sponge-based or stream-cipher-based designs, because the cost of initialisation in these designs makes them generally less attractive for the typical lightweight use case of small messages (so you want to define “sessions” so you can maintain the internal secret state for early release, without having to pay again for initialisation … something the current API wouldn’t allow).
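
For illustration, a minimal sketch of that nonce layout for an AEAD with 96-bit nonces; the 8-byte session ID / 4-byte counter split is an arbitrary example:

    import os

    # Nonce = session ID || per-message counter.
    def make_nonce(session_id: bytes, counter: int) -> bytes:
        assert len(session_id) == 8
        return session_id + counter.to_bytes(4, "big")  # 12-byte nonce

    session_id = os.urandom(8)             # fixed for the whole session
    nonce_msg0 = make_nonce(session_id, 0)
    nonce_msg1 = make_nonce(session_id, 1)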

 

I also note that the example you mention with session keys can also apply to non-permutation designs (see for example the TEDT design https://eprint.iacr.org/2019/137.pdf, also from FX Standaert et al.).

In any case, while I understand that each design can have some features not mentioned in the NIST call, I would like to mention that it is a bit unfair to ask NIST for new feature statements one year after the competition started. Alternatively, should we start a process where every team can propose what they believe should be a clear use-case feature in the NIST competition?

Regards,

Thomas.

 

From: 'Joan Daemen' via lwc-forum <lwc-...@list.nist.gov>
Sent: Monday, 2 December 2019 6:57 PM
To: lwc-...@list.nist.gov
Subject: [lwc-forum] Support for AE sessions AKA intermediate tags


Gilles Van Assche

Dec 6, 2019, 7:57:20 AM
to Thomas Peyrin (Assoc Prof), Joan Daemen, lwc-...@list.nist.gov
Dear Thomas, dear all,

As an engineer working in smartcards and other embedded security
devices, I think sessions are a very useful concept. They allow
authenticating all the messages that were exchanged so far,
incrementally, and they arise naturally in several secure messaging
protocols. For instance, a card and a terminal take turns when
communicating, and they can share the same session. If the terminal
sends a command and the card gives back a simple confirmation, the MAC
authenticates not just a loose "OK" but also that the confirmation
refers to the command sent by the terminal.

Of course, sessions can be implemented by other means than duplex-
and/or permutation-based schemes. The question is more: do the users of
the future standard lightweight scheme(s) find this useful? Are they
interested in the potential benefits of sessions w.r.t. side-channel
attacks that Joan mentioned?

If so, it may be worth looking beyond the current way of interfacing
authenticated encryption and perhaps converge to something that supports
sessions.

> In any case, while I understand that each design can have some
> features not mentioned by the NIST call, I would like to mention that
> it is a bit unfair to ask NIST for new feature statements one year
> after the competition started. Alternatively, should we start a
> process where every team can propose what they believe should be a
> clear use-case feature in the NIST competition ?

I think it is interesting and useful to keep the discussion alive,
especially since the competition does not consider just one metric, but
a wide variety of criteria. Should other designers have features that
may interest the users of lightweight crypto, why not highlight them
before the standard is selected?

Kind regards,
Gilles

Thomas Peyrin (Assoc Prof)

Dec 6, 2019, 9:55:59 AM
to Gilles Van Assche, Joan Daemen, lwc-...@list.nist.gov
Dear Gilles and all,

Sure, I also think sessions can be interesting in some situations; my point is just that you probably don’t need to touch the API to realise this. You can simply use part of the nonce as a counter and part as some ID of the conversation. Now, I understand sponge-based designs would not be so efficient then, but that is a disadvantage of being stateful (of course, conversely, sponges also have some advantages from being stateful).

What you are requesting is a feature that is basically a patch to help only stateful candidates for this particular use case. Then, why not other patches, like allowing non-stateful candidates to have an optional stateful mode to benefit from the advantages of being stateful in some specifically identified situations?


" If so, it may be worth looking beyond the current way of interfacing authenticated encryption and perhaps converge to something that supports sessions."

I think sessions were already public before the competition, so it would have been better to push for this before the competition started, so that all candidates could propose efficient ways to achieve this feature. Moreover, again, I don't think there is a need to change the API to achieve this.


" Should other designers have features that may interest the users of lightweight crypto, why not highlight them before the standard is selected?"

I think before the competition starts would have been a better time. I fear that 30+ candidates each pushing for what they believe is important might rapidly become a mess :-/

Regards,

Thomas.

-----Original Message-----
From: Gilles Van Assche <gilles.v...@st.com>
Sent: Friday, 6 December 2019 8:58 PM
To: Thomas Peyrin (Assoc Prof) <thomas...@ntu.edu.sg>; Joan Daemen <j...@noekeon.org>; lwc-...@list.nist.gov
Subject: Re: [lwc-forum] Support for AE sessions AKA intermediate tags

D. J. Bernstein

Dec 6, 2019, 11:04:01 AM
to lwc-...@list.nist.gov
Thomas Peyrin (Assoc Prof) writes:
> You can simply use part of the nonce to be a counter and to have some
> ID of the conversation.

The session concept says that each message has the tuple of all previous
messages as associated data.

To handle this efficiently with a traditional stream-cipher-plus-MAC
design, you'd want to take the previous MAC as associated data. I'd
guess that this is provable under minor assumptions, and it's probably
good for some non-permutation team to do the work for a proof simply to
disprove the narrative that permutations magically enable this use case.
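
A minimal sketch of this chaining with an off-the-shelf AEAD (AES-GCM via the pyca/cryptography package); the nonce layout and reading the tag as the last 16 bytes are illustrative, and, as said above, the construction is a guess rather than something proven:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)
    session_id = os.urandom(8)

    # Each message takes the previous message's tag as associated data,
    # so a valid tag on message i vouches for the whole session history.
    def send_session(messages):
        prev_tag = b""
        for counter, pt in enumerate(messages):
            nonce = session_id + counter.to_bytes(4, "big")
            ct = aead.encrypt(nonce, pt, prev_tag)
            prev_tag = ct[-16:]  # GCM appends the 16-byte tag to the ciphertext
            yield nonce, ct

    for nonce, ct in send_session([b"command", b"confirmation"]):
        print(nonce.hex(), ct.hex())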

> I think sessions were already public before the competition, so it
> would have been a better timing to push for this before the
> competition, so that all candidates can then propose efficient ways to
> achieve this feature.

I agree. Suddenly extending the core API partway through the project is
clearly going to be a distraction and a source of errors in security
review and in implementation. What's the compensating advantage? Should
we think that considering hashing+AEADs+sessions+XOFs+... is going to
produce a change in the decisions of what to standardize? Why? Does it
really make sense to suddenly divert resources into studying this?

---Dan

Samuel Neves

Dec 6, 2019, 11:26:55 AM
to lwc-...@list.nist.gov
On Fri, Dec 6, 2019 at 4:04 PM D. J. Bernstein <d...@cr.yp.to> wrote:
>
> Thomas Peyrin (Assoc Prof) writes:
> > You can simply use part of the nonce to be a counter and to have some
> > ID of the conversation.
>
> The session concept says that each message has the tuple of all previous
> messages as associated data.
>
> To handle this efficiently with a traditional stream-cipher-plus-MAC
> design, you'd want to take the previous MAC as associated data. I'd
> guess that this is provable under minor assumptions, and it's probably
> good for some non-permutation team to do the work for a proof simply to
> disprove the narrative that permutations magically enable this use case.

It might be of interest to revisit Hoang et al.'s OAE2 definition and
CHAIN mode [1] for arbitrary authenticated encryption schemes, which
aim at goals similar to the ones being discussed here.

[1] https://eprint.iacr.org/2015/189

Thomas Peyrin (Assoc Prof)

Dec 6, 2019, 11:48:27 AM
to D. J. Bernstein, lwc-...@list.nist.gov
Hi Dan,

I see, thanks. Indeed, I believe there would still be some solution to provide exactly this session feature (using the previous tag as AD for the next part of the session, perhaps?). Also, requiring sessions to maintain a state might be a problem in practice: you will need all devices to maintain a state *securely* with all the devices they are currently communicating with ... does this really fit the lightweight scenario?

As a side note, sessions are already done with a counter + session ID in the nonce in many protocols, albeit not with the feature that "each message has the tuple of all previous messages as associated data".

Regards,

Thomas.

-----Original Message-----
From: D. J. Bernstein <d...@cr.yp.to>
Sent: Saturday, 7 December 2019 12:04 AM
To: lwc-...@list.nist.gov
Subject: Re: [lwc-forum] Support for AE sessions AKA intermediate tags

VIZÁR Damian

Dec 9, 2019, 3:13:58 AM
to lwc-...@list.nist.gov
Dear all,

I would like to contribute my five cents to the discussion.

First of all, I think that "secure sessions" are an interesting feature both from a research and an applied perspective. As such, they can surely be very handy in certain applications, some of which also fall under the umbrella of "lightweight" applications. An example would be the card payment scenario described by Gilles.

That being said, I do not believe that this feature ought to be officially endorsed by NIST at this time, for several reasons.

1) Relevance. There are many applications where secure sessions aren't applicable. For example, in a lot of applications with wireless communication and a low energy budget, it will be less of a problem to risk maliciously deleted messages than to guarantee delivery, as guaranteed message delivery can be very expensive. Yet an AE with secure sessions (which authenticates the full message history, as described by Joan) will refuse to decrypt as soon as a message is dropped from the channel, be it maliciously or not. Essentially any wireless device that sends regular status updates (a parking sensor, a connected thermometer, etc.) or streams data (a drone video feed) can serve as an example.
Do the session-friendly applications largely outnumber the others in practice? If so, labelling secure sessions as a desirable feature makes sense, because it would steer our effort to meet the demand. But it is precisely this information about lightweight applications (which design trade-offs and features are most frequently required) that we are lacking.

2) Security goals. The concept of secure sessions can have several flavors, corresponding to several formal definitions of varying strength. As argued, guaranteeing the integrity of a complete session can sometimes be overkill; why not instead require simple message freshness, achievable with a counter in the nonce and a >= check before decryption (see the sketch after this list)? On the other hand, is full session integrity a sufficient goal when going after stronger guarantees? Wouldn't "best possible" security (as defined in the paper by Hoang et al. pointed out by Samuel) be a more appropriate goal? Choosing the right security goal to prioritize again depends on the demand, which we do not really know.

3) Standardization process integrity. So far, the rules of the LWC project have been rather conservative: the teams were not allowed to officially tweak their submissions at the algorithmic level. An official statement labelling a particular feature as desirable would effectively modify the official design objectives, but under the current rules the teams would not be able to adapt to this change.
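
As a concrete reading of the freshness check mentioned in point 2, a minimal Python sketch of the receiver side; reading the counter from the last 4 nonce bytes is an assumption for illustration:

    # Accept a message only if its nonce counter exceeds the last
    # accepted one: replays and reordered messages are rejected, but
    # dropped messages are tolerated.
    class FreshnessChecker:
        def __init__(self) -> None:
            self.last_accepted = -1

        def accept(self, nonce: bytes) -> bool:
            counter = int.from_bytes(nonce[-4:], "big")
            if counter <= self.last_accepted:
                return False
            self.last_accepted = counter
            return True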

I would like to repeat that I consider secure sessions and other advanced features valuable, but we are lacking the supporting information to prioritize these features and/or the resources to analyze significant updates of the candidates (which is not ideal, I agree).

Best regards,
Damian