Hi,
The Japanese CRYPTREC published some great graphs [1-2] on the difficulty of integer factorization and ECDLP using supercomputers from the TOP500 list. The graphs have 30 years of data on supercomputer performance with trend lines, factorization and ECDLP records, and the estimated complexity required to break RSA-3072 and P-256 in one year. Extrapolating (assuming Moore's law does not slow down in the next 70 years), the estimate is that it will take until around 2090 before it is possible to recover a P-256 or RSA-3072 key using one year of processing time on the world's fastest classical supercomputer.
I think the right practical question to ask is whether it is worth finding a private key, not whether a nation state with a Manhattan/Apollo-sized project could. The currently fastest supercomputer, Frontier, is estimated to have cost US$600 million and to have a yearly energy cost of US$20 million. Assuming a lifetime of 5 years, the cost for one year of operation is US$140 million. Very few, if any, keys are worth that. Multi-key attacks that find one of many keys are typically much less interesting for an attacker. Multi-key attacks that can find all keys, like [3], are however very interesting for attackers, but they are at least as hard as finding a single key.
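The amortized cost figure above can be checked with a trivial calculation (the dollar figures are the estimates quoted in the text, not authoritative numbers):

```python
# Back-of-the-envelope amortized cost of one year on Frontier,
# using the (estimated) figures from the text above.
capex_usd = 600e6           # purchase cost (estimate)
energy_per_year_usd = 20e6  # yearly energy cost (estimate)
lifetime_years = 5          # assumed useful lifetime

# Spread the purchase price over the lifetime and add a year of energy.
yearly_cost = capex_usd / lifetime_years + energy_per_year_usd
print(f"Cost of one year of operation: US${yearly_cost / 1e6:.0f} million")
# → Cost of one year of operation: US$140 million
```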
My understanding is that lattice sieving attacks on ML-KEM-512 have time complexity comparable to Pollard's rho on P-256 but much larger memory complexity (and memory is a more expensive and limiting resource). Given the exponential memory requirements of current lattice attack algorithms, ML-KEM-512 seems more costly to break than P-256 using classical supercomputers. Grover's algorithm with its quadratic (n²) speedup will not provide any practical quantum advantage, as at least cubic (n³) or quartic (n⁴) speedups are required for that [4-5].
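The quadratic-vs-cubic point can be illustrated with a toy crossover model. This is only a sketch with an assumed per-operation overhead ratio, not the analysis in the cited papers:

```python
# Toy crossover model (assumed numbers, not from [4-5]): if each
# quantum operation costs r times a classical one, a speedup from
# N to N**(1/k) operations only pays off once
#   r * N**(1/k) < N,  i.e.  N > r**(k/(k-1)).
r = 1e10  # assumed quantum-vs-classical per-operation cost ratio

for k, name in [(2, "quadratic"), (3, "cubic"), (4, "quartic")]:
    crossover = r ** (k / (k - 1))
    print(f"{name}: quantum only wins above ~{crossover:.0e} operations")
```

With these assumed numbers a quadratic speedup only wins above ~1e20 operations, while cubic and quartic speedups pay off orders of magnitude earlier, which is the shape of the argument for why Grover alone is unlikely to be practical.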
Even assuming an extreme case where the above calculations overestimate the real-world price-performance ratio by five orders of magnitude compared to using a proprietary ASIC (as Bernstein et al. estimate for AES-128 [6]) only cuts around two decades from the above numbers.
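The "five orders of magnitude ≈ two decades" conversion follows directly from the doubling arithmetic (the doubling periods are assumptions for illustration):

```python
import math

# How many years of Moore's-law doubling correspond to a factor of
# 10^5 in price-performance?
factor = 1e5
doublings = math.log2(factor)   # ≈ 16.6 doublings

for period in (1.0, 1.5, 2.0):  # assumed doubling periods in years
    print(f"{period} years/doubling -> {doublings * period:.0f} years")
```

For doubling periods in the historical 1-1.5 year range this gives roughly 17-25 years, consistent with "around two decades" above.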
Given this, I see little reason at all to be worried about the practical security of ML-KEM-512. ML-KEM-512 seems very appropriate to use in most use cases where performance is important. I am sure we will see minor optimizations in lattice attack algorithms, but overall they seem relatively stable [7].
[1] https://www.cryptrec.go.jp/symposium/2023_cryptrec-eval.pdf
[2] https://events.btq.li/Japan_CRYPTREC_Activities_on_PQC_Shiho_Moriai.pdf
[3] https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf
[5] https://arxiv.org/pdf/2011.04149.pdf
[6] https://cat.cr.yp.to/cryptattacktester-20231020.pdf#subsection.F.11
[7] https://eprint.iacr.org/2021/785
Cheers,
John Preuß Mattsson
D. J. Bernstein wrote (in another thread):
https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/W2VOzy0wz_E/m/0P1em5h0BAAJ
>For translating bit operations into dollars, a reasonable starting point
>is Bitcoin, which did roughly 2^111 bit operations last year. That's the
>equivalent of a few billion dollars of mass-market Bitcoin-mining
>machines running for a year. A large-scale attacker will be able to
>reach similar economies of scale and has a much larger yearly budget.
>
>Recovering an AES-128 key, out of a billion target keys encrypting a
>known plaintext, takes roughly that number of bit operations, so it's
>feasible for a large-scale attacker.
>
>(The attacker obviously won't use commercial CPUs or FPGAs for such a
>large computation. See the five-order-of-magnitude gap analyzed in
>https://cat.cr.yp.to/cryptattacktester-20231020.pdf#subsection.F.11.)
>
>Recovering a Curve25519 key, out of any number of target keys, takes
>about 2^145 bit operations: i.e., billions of times more difficult. This
>_does not_ mean that an attacker with billions of dollars to spend won't
>be able to break Curve25519 for billions of years: computer technology
>is continuing to become more and more cost-effective, and I wouldn't bet
>against this giving another 2^30 within the foreseeable future. But
>quantum computers look like they'll easily win this race.
>
>(One can always find optimists claiming that chip improvement has come
>to an end and that quantum computing will fail. However, risk management
>requires looking at the bad scenarios. This is why we're working on
>post-quantum cryptography _before_ seeing demonstrations of anything
>useful being accomplished by quantum computers.)
>
>For Kyber-512, the latest Kyber documentation had a security analysis
>from 2020 ending up with a very wide uncertainty range for the costs of
>high-probability single-target "primal" attacks: between 2^135 and 2^165
>bit operations, not counting costs of memory access. Optimists then say
>that
>
>* there won't be improved single-target attack algorithms,
>* multi-target attacks won't do better than single-target attacks,
>* attacks against Kyber won't do better than these MLWE/MLWR attacks,
>* the costs of memory access will add a lot to the 135, and
>* 135, as the low end, has negligible chance of occurring anyway.
>
>However:
>
>* There are already undisputed improvements (plus some disputed
>improvements), the easiest being clumping, which by itself moves
>the entire [135,165] range down by nearly 10 bits. The issue here
>isn't simply that the range for known attacks needs to be updated;
>the big issue is that lattice attacks are complicated and haven't
>stabilized. This needs to be factored into risk analyses.
>
>* There's an ACM CCS 2021 paper briefly outlining a proof that
>one-out-of-many attacks for MLWE are as hard as single-target
>attacks, but the proof has a flaw pointed out and exploited in
>https://cr.yp.to/papers.html#lprrr. The main analysis in the latter
>paper is asymptotic; quantifying the concrete impact on Kyber-512
>will take more research. This is also something where better
>attacks should be expected---within limits, since a 2^40-target
>attack can't drop security levels by more than 40 bits, but those
>limits aren't comforting for Kyber-512.
>
>* Even if QROM IND-CCA2 security is assumed to cover all IND-CCA2
>attacks (which isn't necessarily the case, as discussions of a
>recent NSA proposal illustrate), current proofs don't rule out the
>possibility of Kyber's QROM IND-CCA2 security being much easier to
>break than the underlying MLWE/MLWR problems. The Kyber security
>analysis relies on hope that this proof gap doesn't matter, rather
>than on cryptanalysis.
>
>* The only quantified claim regarding the costs of memory access in
>Kyber-512 attacks comes from NIST incorrectly multiplying the cost
>of each memory access by the number of bit operations in an attack.
>A correct analysis instead multiplies the cost of each memory
>access by the number of memory accesses (a number that doesn't
>appear in NIST's analysis), and searches for attacks optimizing the
>total cost. This is something else where more research is required.
>
>* I'm not aware of attempts to quantify probabilities for the
>components of the uncertainty range in the Kyber documentation, so
>there's no way to derive a quantitative conclusion regarding the
>probability of the low end occurring---never mind the question of
>whether a 25% or 10% or 5% chance of disaster is acceptable.
>
>Some of the alarm bells going off here are already recognized in NIST's
>general statements regarding procedures. For example, NIST said that
>breaking a single AES-128 key (this is closer to the Curve25519 example
>than to the one-out-of-a-billion-AES-128-keys example) is a "floor" for
>security, said that a replacement for gate counts "must at minimum"
>ensure (among other things) that "all known attacks have been optimized
>with respect to the proposed metric", and said that "submitters of
>algorithms where the complexity of the best known attack has recently
>decreased significantly, or is otherwise poorly understood, should be
>especially conservative" in "category" assignments.
D. J. Bernstein wrote:
>So it's okay if an attacker can break all keys for just $700 million?
I don’t think that is okay if the algorithm is used globally like FFDH-1024 was. The largest security agencies could consider $700 million worth it, but for ML-KEM-512 we are very far from that. Today it would be completely infeasible to break a single ML-KEM-512 key even if you spent much more than $700 million, and as far as I'm aware there are no lattice attacks that can recover all the keys. I would love to see more precomputation research like [1] for lattices, codes, multivariate systems, hash-based schemes, and isogenies. When these kinds of attacks are possible you clearly need a larger security margin, and maybe a good conservative approach is to assume that finding all the keys is as easy as finding a single key, unless there are strong reasons to believe otherwise.

FFDH-1024 was clearly used for too long. But the main problem was not the NIST/BSI/ANSSI requirements, but that people did not follow them. NIST only allowed 80-bit crypto until 2010 minus the lifetime the data needed protection. NIST's requirement to only allow 112-bit crypto until 2030 minus the lifetime the data needs protection is much more conservative. But given that many people seem to ignore the lifetime part of NIST’s requirements, maybe simpler requirements like BSI’s “do not use 112-bit crypto after 2022” are better; that should give 40 years of secrecy. Simple requirements create problems as well, though: I think RSA-2048 is fine to use for web server authentication beyond 2022.
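The "end year minus protection lifetime" rule discussed above is easy to state precisely (a minimal sketch; the function name is mine, not NIST's):

```python
# Sketch of the "year X minus protection lifetime" rule: data that must
# stay secret for L years should stop using an algorithm L years before
# the algorithm's end-of-use year.
def last_safe_year(algorithm_end_year: int, protection_lifetime_years: int) -> int:
    """Last year the algorithm may be used for data needing this lifetime."""
    return algorithm_end_year - protection_lifetime_years

# 112-bit crypto allowed until 2030; data needing 20 years of secrecy
# should have stopped using it in 2010:
print(last_safe_year(2030, 20))  # → 2010
```

This makes the point concrete: the lifetime term can move the effective cutoff decades earlier than the headline year, which is exactly the part people seem to ignore.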
>It's instructive to
>look back at how RSA Labs was arguing in 2000 that RSA-768 was safe, and
>then re-read current arguments for the safety of Kyber-512.
In addition to promoting too-short key lengths, RSA has done other bad things, like spreading doubt about ECC for business purposes [2] and implementing Dual_EC_DRBG for profit [3]. RSA-768 was already possible to break on the fastest commercial supercomputers in 2000 [4]. Even at 2^135 operations, assuming Moore's law continues, and ignoring memory requirements, ML-KEM-512 will not be possible to break until around 2080 on commercial supercomputers. I don't see the connection. It might be more instructive to look back at the SHA-3 competition and how NIST ended up standardizing the fixed-length SHA-3 functions with meaninglessly high pre-image security, significantly hurting their performance and leading to them not being deployed.
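The ~2080 extrapolation can be reproduced with rough assumed numbers (Frontier's throughput treated as bit operations, and an assumed TOP500 doubling period; both are approximations, not measurements):

```python
import math

# Very rough extrapolation: when could a top supercomputer perform
# 2^135 operations in one year? (All numbers are assumptions.)
ops_per_second = 1.1e18                       # Frontier, ~1.1 exaFLOPS
ops_per_year = ops_per_second * 365 * 24 * 3600
current_bits = math.log2(ops_per_year)        # ≈ 84.8 "bits" per year
doublings_needed = 135 - current_bits         # ≈ 50 doublings
doubling_period_years = 1.2                   # assumed TOP500 doubling period
print(f"~{2023 + doublings_needed * doubling_period_years:.0f}")
# → ~2083
```

With a doubling period near 1.1-1.2 years this lands around 2080, matching the figure above; memory costs would push it further out.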
>See https://blog.cr.yp.to/20151120-batchattacks.html for analysis of
>how the "attacker economist" approach goes wrong.
I think most approaches go wrong from time to time; not using any approach is the main problem. You can also argue that the “theoretical cryptographer” approach has gone wrong in the past, with slower performance, lower security, increased cost, and increased CO2 emissions as a result. Upgrading existing systems can be very costly, and using slower algorithms than needed at large scale leads to significant energy use and CO2 emissions. The unfounded conspiracy theories about backdoors in the NIST P-curves led many people to doubt ECC and stay on FFDH and RSA. I also know of developers that disabled P-256 after reading https://safecurves.cr.yp.to/. As they did not have Curve25519 implemented, the result was that they only supported older FFDH-based key exchange with much lower security than P-256.
Following the same conservative approach as for 112-bit crypto, you can use 128-bit crypto until at least 2060. The practically weakest part of 128-bit crypto is, as you write, multi-key attacks against AES-128, but I don’t think multi-key attacks finding one of many keys are very interesting for practical attackers, and best practice today is to use randomness in the nonce to increase the complexity of multi-key attacks. The nonces in AES-128-GCM in TLS 1.3, for example, use 96 bits of randomness, which is great. I think that should be the recommendation, rather than to stop using AES-128. But arguing that AES-128 should be phased out because of classical attacks, as you do in [4], makes a lot more sense to me than arguing that it should be phased out because of Grover’s algorithm.
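The multi-key point can be quantified with a toy model (an illustrative sketch of the standard one-out-of-many argument, not a precise attack analysis):

```python
import math

# Toy model: with T target keys all encrypting the SAME known
# plaintext/nonce, one key trial can be tested against all T targets
# at once, so finding ONE key costs about 2^k / T trials.
def bits_to_find_one_key(key_bits: int, num_targets: int) -> float:
    return key_bits - math.log2(num_targets)

# A billion targets with identical nonces:
print(f"{bits_to_find_one_key(128, 10**9):.0f} bits")  # → 98 bits
# With 96 bits of per-session nonce randomness, targets almost never
# share a nonce, so each trial again tests (roughly) only one target.
```

This is why randomized nonces are a cheaper fix than abandoning AES-128: they remove the batching advantage rather than the single-key margin.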
Talking about cost, there is a large difference between existing systems and new systems; NIST typically does not make any distinction between the two. In the mobile industry, for example, 6G will use 256-bit keys, but 4G and 5G will likely continue to live for decades with 128-bit keys, which I think is perfectly fine.
>> but much larger
>> memory complexity (which is a more expensive and limiting resource).
>
>There are lower-memory options. https://eprint.iacr.org/2020/1260
>estimated that a quantum version of its enumeration algorithm would be
>faster than quantum sieving for the sizes we're talking about here. The
>first concrete analysis of the cost of the main quantum subroutine in
>this attack is https://eprint.iacr.org/2023/1623.
Thanks for the links. I have not read them yet, but I assume they might require much larger quantum computers than the ones breaking RSA and ECC. It would be very nice to have some kind of classical supercomputer graphs like [5] that take memory into account. For many attacks memory is not a limiting resource, but for classical attacks on lattice systems my understanding is that memory is likely to be a limiting resource.
Cheers,
John Preuß Mattsson
[1] https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf
[2] https://eprint.iacr.org/2008/390.pdf
[3] https://eprint.iacr.org/2015/767.pdf
[4] https://www.cryptrec.go.jp/symposium/2023_cryptrec-eval.pdf