Hi,
We had a paper [1-2] at the Fourth NIST PQC Standardization Conference in 2022 asking for more work on Non-Interactive Key Exchange (NIKE). I am very happy to see work on Swoosh [3], the first “practical” lattice-based NIKE. I think it is very nice work that will hopefully inspire more work on quantum-resistant Non-Interactive Key Exchange.
I think the most practical need for NIKE is in constrained IoT radio networks. In most other use cases you can just re-design your protocol to use ML-KEM. The IETF protocols EDHOC [4] and Group OSCORE [5] both use ECDH in ways not possible with a KEM. They have quite a few implementations, a lot of industry interest, and will soon be published as standards-track RFCs. Message size is typically much more constrained than CPU cycles in constrained IoT radio networks. Constrained radio networks are growing quickly, and using ML-KEM or ML-DSA in the more constrained ones is not feasible at all. My dream would be to have something with key sizes like CSIDH-512 [6], but faster, standardized sometime in the future.
It is a bit sad that performance papers for CSIDH like [7] investigate TLS, where sizes are not very constrained. The example “constrained environments, such as 46 kbps IoT networks” mentioned in [7] is also not very constrained at all. A comparison of CSIDH and ML-KEM in, e.g., LoRaWAN, where frame sizes are typically 11 or 51 bytes and devices are required to sleep an hour after sending a few frames, might give completely different latency results.
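To make the frame-count gap concrete, here is a small arithmetic sketch. The 11- and 51-byte LoRaWAN payload sizes are taken from the text above; the ML-KEM-512 sizes are from FIPS 203 and the 64-byte CSIDH-512 public-key size from [6]. Protocol headers, MIC overhead, and duty-cycle limits are deliberately ignored, so this is a lower bound only.

```python
import math

# Public-key / ciphertext sizes in bytes. ML-KEM-512 sizes are from
# FIPS 203; the 64-byte CSIDH-512 public key is from [6]. Headers and
# duty-cycle limits are deliberately ignored in this sketch.
SIZES = {
    "CSIDH-512 public key": 64,
    "ML-KEM-512 encapsulation key": 800,
    "ML-KEM-512 ciphertext": 768,
}

def lorawan_frames(payload_bytes: int, frame_payload: int) -> int:
    """Frames needed to carry payload_bytes over frame_payload-byte frames."""
    return math.ceil(payload_bytes / frame_payload)

for name, size in SIZES.items():
    for frame in (11, 51):
        n = lorawan_frames(size, frame)
        print(f"{name}: {size} B -> {n} frames of {frame} B")
```

With 51-byte frames this gives 2 frames for a CSIDH-512 public key versus 16 for an ML-KEM-512 encapsulation key; if the device must sleep an hour after a few frames, that difference dominates latency regardless of CPU cost.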
While there seems to be consensus on the security of CSIDH against classical attacks, there does not seem to be consensus on its security against quantum attacks. My understanding is that some researchers claim that CSIDH-2048 or even CSIDH-4096 is needed to reach NIST security level 1, while the authors of CSIDH seem to stick to the original security claims; at least I have not seen any statement from the authors saying otherwise. CSIDH security was discussed this week at IETF 118. I would love to hear some more views on this.
Cheers,
John Preuß Mattsson
[2] https://csrc.nist.gov/Presentations/2022/constrained-radio-networks-small-ciphertexts
[3] https://eprint.iacr.org/2023/271
[4] https://datatracker.ietf.org/doc/draft-ietf-lake-edhoc/
[5] https://datatracker.ietf.org/doc/draft-ietf-core-oscore-groupcomm/
[6] https://eprint.iacr.org/2018/383
On 10 Nov 2023, at 14:17, 'John Mattsson' via pqc-forum <pqc-...@list.nist.gov> wrote:
> It is a bit sad that performance papers for CSIDH like [7] investigate TLS, where sizes are not very constrained. The example “constrained environments, such as 46 kbps IoT networks” mentioned in [7] is also not very constrained at all. A comparison of CSIDH and ML-KEM in, e.g., LoRaWAN, where frame sizes are typically 11 or 51 bytes and devices are required to sleep an hour after sending a few frames, might give completely different latency results.
Thanks for opening up this discussion, I think it’s one that we’ve maybe not talked enough about (even if I expect the outcome to be that we’re in a painful situation right now with no good way out).
Yes, big thanks for starting this discussion! It’s a very important topic, though, admittedly, a large fraction of the “established” Internet is not impacted and does not care about it.
> It is a bit sad that performance papers for CSIDH like [7] investigate TLS, where sizes are not very constrained. The example “constrained environments, such as 46 kbps IoT networks” mentioned in [7] is also not very constrained at all. A comparison of CSIDH and ML-KEM in, e.g., LoRaWAN, where frame sizes are typically 11 or 51 bytes and devices are required to sleep an hour after sending a few frames, might give completely different latency results.
As I also wrote the “paper mentioned” [8], I would like to add that there is a bit of a tension here: I have the impression that devices that use extremely constrained transports like LoRaWAN are generally also extremely constrained in battery life and computational power. CSIDH does not do very well in those dimensions either…
Not necessarily – there are IoT devices that, from a practical point of view, are only constrained by bandwidth (and related timing).
But yes – CSIDH is pretty bad for both CPU/battery and timing/performance. NIST standards are the opposite: good on battery and CPU, but extremely bad on bandwidth…
I am generally very interested in the performance of post-quantum protocols and how we can design variants of protocols to play to the strengths of the post-quantum primitives that we have, so I’m very interested in continuing this discussion and would welcome further input on protocols and network environments to consider.
By all means, let’s continue. Your AuthKEM is a step in the right direction (in the spirit of MQV/HMQV 😉). We streamlined it for other-than-TLS applications (shameless plug: Compact Authenticated Key Exchange 😉), but for LoRaWAN it still may be too bandwidth-hungry.
I’d like to see a KEM (or KA – in my environment we can control the usage to not really worry about IND-CCA 😉) with sizes of CSIDH and performance of Kyber. 😃
--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
To view this discussion on the web visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/0F958A79-4F06-4095-B31A-67BC5D02366B%40thomwiggers.nl.
On 10 Nov 2023, at 15:17, Blumenthal, Uri - 0553 - MITLL <u...@ll.mit.edu> wrote:
> I’d like to see a KEM (or KA – in my environment we can control the usage to not really worry about IND-CCA 😉) with sizes of CSIDH and performance of Kyber. 😃
I fully agree with Wrenna Robson.
There absolutely exist use cases where “Cost of CRQC attack” > “Value of thing being attacked”, or where “Time to perform CRQC attack” > “Lifetime of key / data”.
It’s probably true that these don’t get discussed enough.
That said, I think the analysis of whether this is appropriate falls into the bucket that Thom mentioned: “Speak with your cryptographer to see if staying on ECDSA is right for you!”, so it seems a bit out of scope for the [pqc-forum]; we could consider writing a guidance document about this, but of course that guidance will change as the cost and time of CRQC attacks get better.
---
Mike Ounsworth
> I fully agree with Wrenna Robson.
I don’t.
> There absolutely exist use cases where “Cost of CRQC attack” > “Value of thing being attacked”, or where “Time to perform CRQC attack” > “Lifetime of key / data”.
Oh, absolutely! Nobody’s (AFAIK) arguing with this. The point is – there are valid use cases where “Cost of CRQC attack” < “Value of thing being attacked” and/or “Time to perform CRQC attack” < “Lifetime of data” (omitting “key” because the only value of an old key is that it unlocks old-but-still-valuable data).
> It’s probably true that these don’t get discussed enough.
Perhaps, there’s nothing to discuss – if your data is highly unlikely to remain valuable by the time CRQC arrives, just continue “business as usual”, no need to transition to anything, just enjoy the small size and decent performance of ECDH(E).
> That said, I think the analysis of whether this is appropriate falls into the bucket that Thom mentioned: “Speak with your cryptographer to see if staying on ECDSA is right for you!”,
I’m far more concerned about key exchange than about (dynamic) signatures. I can accept the risk of somebody authenticating as myself-10-years-ago to a peer that existed 10 years ago. But quite a bit of my data needs to remain confidential 10 (and more) years from now.
> so it seems a bit out of scope for the [pqc-forum]; we could consider writing a guidance document about this, but of course that guidance will change as the cost and time of CRQC attacks get better.
Frankly, I don’t know and I don’t care. If there’s a good reason to believe that your data “lifetime” (aka, the period during which it remains valuable) won’t “outlive” the appearance of a CRQC – then you don’t have to bother with PQC.
The only person who can guide you here is one who understands the value “trajectory” of your data, and whose crystal ball is accurate “enough” to predict how much time we have before a CRQC is a reality.
This is perhaps a more general question than is necessary for this thread, but: for some applications, it's difficult for me to imagine the world where quantum security actually matters for them. Clearly I don't mean everyday communication online, easy to see why that needs to be. But even if a cryptographically relevant quantum computer exists in 15-20 years, is it really realistic that the organisation with access to it will use it to forge train tickets? How at-risk in general are small IoT devices - what exactly is the threat model here?
> This is perhaps a more general question than is necessary for this thread, but: for some applications, it's difficult for me to imagine the world where quantum security actually matters for them.
Oh, sure. But why would those apps and their developers even be here?
> I'm asking this question with genuine curiosity. The cryptographer in me says "no no, you need everything to be as secure as possible, you can't make exceptions". But is that actually a reasonable ask? What is the actual projected threat here? I hope this isn't too naive a question.
I’d say “… you need everything relevant to be as secure …” Caveat: there could/would be things whose value is low, but they touch something more valuable – and can serve as a “backdoor”.
> I’d like to see a KEM (or KA – in my environment we can control the usage to not really worry about IND-CCA 😉) with sizes of CSIDH and performance of Kyber. 😃
It’s hard to disagree with wanting this, but perhaps for the purposes of having a more useful discussion we should focus on the standards and proposed algorithms that are available today; if someone invents a very fast AND quantum-resistant AND very small key exchange or signature algorithm I’m sure that they’d be very eager to submit it to a cryptography conference to get that “best paper” award (and submit it to this mailing list soon after).
Well, yes, it was somewhat tongue-in-cheek.
However, within the existing algorithms, there is some discussion to be had about the security margin. For example, in *extremely limited* use cases, algorithms with lower query limits, or with security levels below 128 bits but still large enough, might be interesting to discuss.
Surfacing those use cases might be helpful to algorithm designers. One use case that I am thinking of would be disposable, one-time-use public transport chipcards (e.g. MiFare Ultralights are/were used in Dutch train tickets, though without cryptography [0]). 80 bits of security seems “plenty”? Another example where I know slightly-lower-security is used is the Apple Find My network, which uses P-224 to fit into Bluetooth packets [1].
I agree.
😉
Hi,
Thom Wiggers wrote:
> I have the impression that devices that use extremely constrained transports like LoRaWAN are generally also extremely constrained in battery life and computational power.
Yes, and additionally they are also very constrained regarding memory and storage. Transmitting, receiving, and listening on the radio require relatively much energy compared to CPU cycles, so you can often save energy by spending CPU cycles on reducing message sizes. If you only have to do asymmetric operations during bootstrapping, you can justify quite a high energy cost for this one-time operation for increased security or ease of deployment.
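As a back-of-envelope illustration of this trade-off: all energy constants below are assumed placeholder values, not measurements of any particular radio or MCU, and real numbers vary by orders of magnitude across radios, spreading factors, and chips.

```python
# All constants are assumptions for illustration, not measurements.
RADIO_NJ_PER_BYTE = 2_000.0   # assumed ~2 uJ per byte on air
CPU_NJ_PER_CYCLE = 0.5        # assumed ~0.5 nJ per cycle (Cortex-M class)

def radio_energy_nj(num_bytes: int) -> float:
    """Energy to transmit num_bytes, under the assumed per-byte cost."""
    return num_bytes * RADIO_NJ_PER_BYTE

def cpu_energy_nj(cycles: int) -> float:
    """Energy to execute `cycles` CPU cycles, under the assumed per-cycle cost."""
    return cycles * CPU_NJ_PER_CYCLE

# Break-even point: how many CPU cycles cost as much as one byte on air?
breakeven_cycles = RADIO_NJ_PER_BYTE / CPU_NJ_PER_CYCLE
print(f"Under these assumptions, 1 byte on air ~ {breakeven_cycles:.0f} cycles")
```

Under these (assumed) numbers, shaving one byte off a message pays for thousands of extra CPU cycles, which is the intuition behind spending computation to compress messages.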
Thom Wiggers wrote:
> However, within the existing algorithms, there is some discussion to be had about the security margin. For example, in *extremely limited* use cases, algorithms with lower query limits, or with security levels below 128 bits but still large enough, might be interesting to discuss.
Yes, I think that is a good approach. NIST LWC, for example, only required 112-bit security, even though the winner Ascon does provide 128-bit security. I would like to see a future lightweight KEM/NIKE project or competition in the spirit of LWC, the CFRG curves, eSTREAM, PHC, or CAESAR.
Wrenna Robson wrote:
> Clearly I don't mean everyday communication online, easy to see why that needs to be. But even if a cryptographically relevant quantum computer exists in 15-20 years, is it really realistic that the organisation with access to it will use it to forge train tickets? How at-risk in general are small IoT devices - what exactly is the threat model here?
Mike Ounsworth wrote:
> There absolutely exist use cases where “Cost of CRQC attack” > “Value of thing being attacked”, or where “Time to perform CRQC attack” > “Lifetime of key / data”.
This is a very good discussion to have. An attacker wants the expected value to be higher than the cost. For some industry systems you might be able to estimate the value. For web technologies that is harder as a lot of different things are protected by the same protocol. The average everyday communication protected with a single ECDHE key has very little value (like this public mail), clearly less than a train ticket.
Currently we don’t know if there will ever be any CRQCs. If CRQCs are ever built they will likely be very expensive (1 billion dollars each?) and initially only be used for breaking high-value targets. But it cannot be ruled out that cheap mass-produced CRQCs might eventually exist. That will, however, likely take many decades after the first CRQC is constructed.
Cheers,
John Preuß Mattsson
Hi,
Peter Schwabe wrote:
> It would be very useful to have one concrete example, with a full open specification, of an application that is very constrained by message sizes and needs (post-quantum) key agreement. This could be an application of TLS on top of a very constrained network or some other protocol. Ideally of course this would be a widely used application or protocol.
>
> I have in the past tried to find such examples that can be used as a target for optimization in public research, but essentially all answers I got were along the lines of "these protocols and applications exist, but we cannot tell you any details".
IoT often uses DTLS, but a _very_ constrained radio network would likely not use DTLS. Even with Raw Public Keys, a single handshake is 900 bytes without counting fragmentation overhead (and there would be a lot of fragmentation).
https://www.ietf.org/archive/id/draft-ietf-iotops-security-protocol-comparison-03.html
Some non-constrained examples of NIKE deployed in the real world include WireGuard (which uses Noise IK), X3DH, and the Diffie-Hellman ratchet. A relatively good real-world test for NIKE would be a person in a car or train using the Diffie-Hellman ratchet on their mobile phone in a country where mobile coverage is very bad, e.g., Germany, where the connection typically switches between 4G, EDGE, and no connection.
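For readers less familiar with the NIKE interface discussed in this thread: the defining property is that two parties with static key pairs derive the same shared key without exchanging any messages at key-agreement time, which a KEM cannot offer. A toy sketch of that interface, using classical finite-field Diffie-Hellman with deliberately insecure toy parameters (and of course not post-quantum):

```python
import hashlib
import secrets

# Toy parameters for illustration only: this Mersenne prime is far too
# small for real use, and classical DH is not quantum-resistant anyway.
P = 2**127 - 1
G = 3

def keygen():
    """Generate a static key pair (secret exponent, public group element)."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def shared_key(my_sk, their_pk):
    """NIKE: a deterministic function of my static secret and their static public key."""
    z = pow(their_pk, my_sk, P)       # g^(ab) mod p, same for both parties
    return hashlib.sha256(z.to_bytes(16, "big")).hexdigest()

# Both parties compute the same key from public material alone,
# with no ciphertext or other message sent at key-agreement time.
a_sk, a_pk = keygen()
b_sk, b_pk = keygen()
assert shared_key(a_sk, b_pk) == shared_key(b_sk, a_pk)
```

A KEM in the same position would require the encapsulating party to transmit a fresh ciphertext to its peer, which is exactly the extra message that protocols like EDHOC and Group OSCORE avoid by relying on static-static ECDH.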
A constrained example of NIKE deployed in the real world is EDHOC (which is similar to Noise XX). One of several places where EDHOC is used is the 6TiSCH (IPv6 over the TSCH mode of IEEE 802.15.4e) join process. I will come back with more concrete details of the constrained lower layers where EDHOC and/or static-static ECDH is used and how to simulate them.
https://ieeexplore.ieee.org/document/9756207
Unless the quantum computer industry dies (which is possible), I think everything non-constrained will use quantum-resistant algorithms in a few decades. For very constrained systems the question is more about what is possible. I have met a lot of IoT people who thought they needed to use PSK authentication and key exchange to get small messages. When told that EDHOC can do it with asymmetric crypto in 101 bytes, nobody wants to use PSK anymore. I think the same is true for quantum resistance: if there exists a NIKE with 512-bit keys and performance like ML-KEM, everybody will want quantum resistance.
Cheers,
John Preuß Mattsson