Jul 24, 2024, 2:24:37 PM

to pqc-co...@nist.gov, pqc-...@list.nist.gov

Summary: This comment points out a mathematical explanation for _why_

Classic McEliece has much smaller ciphertexts than, e.g., Kyber.

Context: The primary motivation for this KEM is security, but a frequent

observation is that this KEM has much smaller ciphertexts than any of

the alternatives. An interesting consequence is that the minimum network

traffic for frequently used long-term post-quantum public keys (e.g.,

server identity keys) is achieved by this KEM. Various post-quantum

deployments, such as Rosenpass, would lose efficiency if they switched

the long-term keys from this KEM to alternatives.

The underlying cryptanalytic facts are that, for any particular

ciphertext size, the ranking of costs of known attacks is as follows:

* Fastest: attacking ciphertexts or keys for Kyber, NTRU, etc.

* Much slower: attacking McEliece ciphertexts.

* Slowest: attacking McEliece keys.

People often wonder whether this means that the McEliece system hasn't

been studied: i.e., whether further study would eliminate the McEliece

ciphertext-size advantage. Normally this question is answered with a

pointer to the long history of McEliece attack papers. I'm filing this

comment to highlight a different answer.

I've recently given a talk with a unified description of code/lattice

cryptosystems, pinpointing what's different about the McEliece system:

https://cr.yp.to/talks/2024.07.17/slides-djb-20240717-mceliece-4x3.pdf

Attacks against all of these cryptosystems can be viewed as finding a

lattice vector close to a target point. Barak similarly commented in

https://eprint.iacr.org/2017/365

on the "family" of "coding/lattice" systems, and claimed that "all the

known lattice-based public-key encryption schemes can be broken using

oracle access to an O(√n) approximation algorithm for the lattice

closest vector problem". A closer look shows, however, that it's easy

to write down systems where that level of approximation doesn't break

the system. For McEliece, the gap is only polylogarithmic: the distance

d from the target to the lattice is so high that exponentially many

lattice points are within distance d*polylog of the target.
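As a rough numerical illustration of that last point (my own back-of-the-envelope heuristic, not something from the slides), take the mceliece348864 parameters n = 3488, t = 64 and the standard random-code estimate that a Hamming ball of radius r around a random target contains about C(n,r)·2^(k−n) codewords. At the decoding radius t the ball is essentially empty, but even a small constant factor above t already puts exponentially many lattice points inside it:

```python
from math import comb, log2

# mceliece348864 parameters (used here purely as a concrete example)
n, m, t = 3488, 12, 64
k = n - m * t  # code dimension 2720

def log2_expected_codewords(r):
    """log2 of the expected number of codewords of a random [n, k] code
    within Hamming distance r of a uniformly random target point."""
    return log2(comb(n, r)) + k - n

# Radius t: essentially no codewords other than the intended one
# (unique decoding). Radius 2t and 3t: astronomically many.
for r in (t, 2 * t, 3 * t):
    print(f"radius {r}: ~2^{log2_expected_codewords(r):.0f} codewords")
```

On these parameters the estimate jumps from far below 1 at radius t to an exponentially large count by radius 3t, matching the picture above: the target is very far from the lattice, yet slightly larger balls around it are crowded with lattice points.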

The fundamental reason for this difference is that, for any particular

ciphertext size, keygen and dec use a quantitatively much more powerful

decoder for McEliece than for the alternatives. Simple attacks that

exploit the low distances allowed by the alternative decoders are faster

than state-of-the-art attacks against the much larger distances allowed

by the Goppa decoder used by McEliece.

It's also worth noting that the structure of the alternative decoders is

the basic reason that the weakness of the alternatives is shared between

ciphertexts and keys. In principle the alternatives can strengthen their

keys (see, e.g., https://eprint.iacr.org/2013/004), but this costs size

(https://link.springer.com/chapter/10.1007/978-3-319-11659-4_2), making

ciphertexts even easier to attack for any particular ciphertext size.

---D. J. Bernstein

Jul 25, 2024, 3:06:46 PM

to pqc-forum, D. J. Bernstein, pqc-...@list.nist.gov, pqc-co...@nist.gov

So:

* McEliece: "maximal" error with a fancy algebraic decoder.

* Kyber: simple modulation and rounding ("compress/decompress").
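To make the Kyber side of that contrast concrete: the rounding is per-coefficient, with no code across positions. Here is a simplified sketch of the Compress/Decompress functions from the Kyber specification (q = 3329, with d = 10 as used for ciphertext compression; this is illustrative, not the reference implementation):

```python
Q = 3329  # the Kyber modulus

def compress(x: int, d: int) -> int:
    # round((2^d / Q) * x) mod 2^d; Q is odd, so no exact ties occur
    return ((x << d) + Q // 2) // Q % (1 << d)

def decompress(y: int, d: int) -> int:
    # round((Q / 2^d) * y), with ties rounded up
    return (Q * y + (1 << (d - 1))) >> d

# Round-tripping a coefficient introduces at most round(Q / 2^(d+1))
# of additive error (2 for d = 10) -- this per-coefficient noise is
# what a Kyber-style scheme must absorb, one position at a time.
x = 1234
y = compress(x, 10)       # 1234 -> 380
assert abs(decompress(y, 10) - x) <= 2
```

Kyber applies this coefficient by coefficient (d_u = 10 and d_v = 4 in ML-KEM-768); the point is just that its built-in "decoder" is this rounding step, versus the t-error Goppa decoder in McEliece.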

Maybe this question is naive, but isn't there some way to improve LWE schemes by, e.g., first encoding the message with an error-correcting code and then using larger noise, or by replacing some part of the "noisy ElGamal"-style scheme with something inspired by coding theory? I think I asked this somewhere else and was told that it had been considered and there wasn't any good trade-off, but I don't remember the details.

Jul 25, 2024, 5:27:20 PM

to Bobby McGee, pqc-...@list.nist.gov, D. J. Bernstein, pqc-co...@nist.gov

Hi Bobby,

It’s been tried, but as far as I know, the decoders are not nearly as powerful

as McEliece’s Goppa decoder.

In the KEM standardization process, at least LAC, Hila5 (later was merged into

Round5) and ThreeBears were using some form of error correction, and at least

early versions of NewHope did as well. This gave them slightly smaller

parameters at a cost in complexity.

You could of course push things farther. As an extreme example, I've tried

various combinations of RLWR and somewhat more powerful error correction

(E8+large-field LDPC) under the "Glowstick" family of toy schemes. These are

complete toys, and do not attempt to achieve negligible decryption failure,

CCA security, or side-channel resistance: the idea is just to explore how small

you can make the public key and ciphertext by using a stronger code (still not as strong a code as Goppa, though it is a soft-decision code). In this case you can achieve 32 + 243 + 323 bytes for the nonce + public key + ciphertext for “doubtfully 128-bit” parameters, or 32 + 307 + 403 bytes for “more probably 128+-bit” parameters, etc.

A more skilled coding practitioner could probably go even farther with this.

These schemes do not approach McEliece’s ciphertext size, but of course the

public key is much smaller.

Regards,

-- Mike

