Classic McEliece


D. J. Bernstein

Apr 25, 2025, 4:30:45 AM
to pqc-...@list.nist.gov
There were comments on pqc-forum last month pointing out that there are
errors in NIST IR 8545 regarding Classic McEliece. For example, the
report's 114189-kcycle keygen figure is inflated by 4x compared to reality, and
the report claims incorrectly that Classic McEliece sends more data than
BIKE for hybrid XML encryption and SAML SSO. (In fact, Classic McEliece
is by far the lowest-cost post-quantum option for those applications.)

https://blog.cr.yp.to/20250423-mceliece.html now lists further errors in
NIST IR 8545---in particular, but not exclusively, errors regarding
Classic McEliece.

Can NIST please issue errata? Thanks in advance.

---D. J. Bernstein (speaking for myself)

Moody, Dustin (Fed)

May 1, 2025, 9:56:32 AM
to D. J. Bernstein, pqc-forum
D. J. Bernstein,

Thank you for your feedback on our Round 4 report, NIST IR 8545. As noted in the document, the benchmark figures presented in the tables are intended to be representative, and NIST did not rely solely on those listed numbers when evaluating performance. We recognize that performance can vary significantly depending on factors such as platform, application, computing environment, and network conditions. We wanted to keep the report short, so we did not cite every source that we reviewed. The performance data on SUPERCOP has been very useful to us over the years. We have verified that the performance figures given in the tables in our report accurately reflect the benchmarks we cited. We stand by the conclusions and decisions outlined in the report.

Dustin Moody
NIST PQC



From: pqc-...@list.nist.gov on behalf of D. J. Bernstein
Sent: Friday, April 25, 2025 4:30 AM
To: pqc-forum
Subject: [pqc-forum] Classic McEliece


D. J. Bernstein

May 1, 2025, 11:38:03 AM
to pqc-...@list.nist.gov
'Moody, Dustin (Fed)' via pqc-forum writes:
> the benchmark figures presented in the tables are intended to be
> representative

NIST IR 8545 says it's presenting evaluations of the "fourth-round
candidates".

However, for Classic McEliece, the numbers in Table 5 of NIST IR 8545
are actually regarding software _from 2019_, ignoring software speedups
and benchmarks included in the third-round and fourth-round packages.

Tung Chou, the software author, also filed an official comment with NIST
(https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/koZWqTM714A/m/0dKXhAkTBQAJ)
in June 2020 on some big speedups. That was almost five years ago.

> We recognize that performance can vary significantly depending on
> factors such as platform, application, computing environment, and
> network conditions.

Those variations have nothing to do with the issue at hand. NIST is
claiming (e.g.) 114189 kcycles for mceliece348864f keygen on "x86_64".
A closer look shows that this is from an Intel Xeon Platinum 8259CL CPU.
The submitted fourth-round software is 4x faster than that on that CPU.

The network conditions and application are irrelevant. The problem is
NIST claiming benchmarks of the fourth-round candidate when in fact
NIST is talking about code from 2019, ignoring subsequent speedups.

> We have verified that the performance figures given in the tables in
> our report accurately reflect the benchmarks we cited.

The source "[1]" cited by NIST already had a prominent disclaimer at the
top in April 2024 saying that it was presenting out-of-date numbers from
a defunct profiling project:

https://web.archive.org/web/20240425050637/https://openquantumsafe.org/benchmarking/

Why was NIST taking numbers from this source in a March 2025 report and
suppressing any mention of the disclaimer?

It is, in any event, easy to test the fourth-round software and see that
the NIST IR 8545 numbers are highly inaccurate. This is now my third
time pointing out this particular error in NIST IR 8545; it's amazing
that the error still hasn't been fixed.

Oscar Smith

May 1, 2025, 3:54:36 PM
to pqc-forum, Moody, Dustin (Fed), D. J. Bernstein
> We have verified that the performance figures given in the tables in our report accurately reflect the benchmarks we cited. We stand by the conclusions and decisions outlined in the report.

Isn't this a non sequitur? If you cited benchmarks that were bad (which you did), your conclusions and decisions would be incorrect even if your tables accurately reflected your citations.

D. J. Bernstein

Aug 5, 2025, 4:52:31 PM
to pqc-...@list.nist.gov
NIST IR 8545 claims, e.g., 114189 kcycles for mceliece348864f keygen.
That's ludicrously inaccurate: it's a statement about the fourth-round
Classic McEliece submission, but that submission reports 4x faster
keygen, and includes software achieving this speed.

Regarding _how_ NIST erred here, I wrote the following back in May:

> However, for Classic McEliece, the numbers in Table 5 of NIST IR 8545
> are actually regarding software _from 2019_, ignoring software speedups
> and benchmarks included in the third-round and fourth-round packages.

I retract this statement. In fact, NIST IR 8545's wrong numbers arose
from a different, even more astonishing, evaluation error by NIST. In
the rest of this message, I'll first explain how claims by NIST led to
my previous diagnosis, and then I'll pinpoint what actually happened.

As background, NIST required the submission package for each submitted
algorithm to provide speed reports and complete software. NIST already
reported early in the first round that it was using the software to do
its own double-checks of the speed reports.

When NIST IR 8545 claimed to be reporting benchmarks of the submitted
algorithms, _obviously_ it was reporting benchmarks of the code from the
submission packages, right? After all, it would be glaringly improper to
substitute slower algorithms and misrepresent measurements of those as
the speeds of the submitted algorithms.

Given this basic constraint, the only way for NIST to report numbers 4x
too high for fourth-round Classic McEliece keygen (without some drastic
failure in cycle counting) would be to rewind to the code from the 2019
submission, back before a ~4x speedup from Tung Chou.

This was an easy explanation matching the observations available at that
point. This explanation also fits the fact that NIST sent email in 2024
saying 128 bytes rather than 96 bytes for mceliece348864 ciphertexts. (The
old 128-byte ciphertext included a 32-byte confirmation value on top of the
96-byte syndrome; the current submission's ciphertext is just the syndrome.)

Procedurally, I complained about NIST IR 8545 having very wrong numbers;
NIST didn't respond. I complained again; NIST stonewalled. I complained
again; NIST didn't respond. I invoked a formal procedure that forces a
response; the response was mostly stonewalling (and took three months),
but did specifically address the second-round-vs.-fourth-round point:
"NIST disagrees with this assertion. NIST is confident that the
benchmarks cited from Reference [1] accurately reflect the fourth-round
version of Classic McEliece."

The numbers in NIST IR 8545 remain wildly inaccurate. NIST's declaration
of confidence doesn't magically make the numbers correct. But one gets
the feeling that NIST's underlying mistake wasn't as simple as
conflating the round-2 and round-4 speeds.

So I looked more closely. NIST's numbers are from a source that says it
used liboqs-0.9.0-rc1. Focusing again on mceliece348864f, I compared the
code in

liboqs/src/kem/classic_mceliece/pqclean_mceliece348864f_avx2

to what was actually submitted in round 4.

First observation: The code isn't the same. So there's no reason to
believe that the liboqs speed should match the speed of the submission.
I also see nowhere that liboqs makes any claim of a match.
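
Anyone who wants to check this first observation can simply diff the two
trees. A minimal sketch (not part of the measurements; it assumes the liboqs
checkout from the commands below and the round-4 tarball from
https://classic.mceliece.org/nist/mceliece-20221023.tar.gz unpacked in the
current directory):

diff -rq \
  mceliece-20221023/Additional_Implementations/kem/mceliece348864f/avx \
  liboqs/src/kem/classic_mceliece/pqclean_mceliece348864f_avx2

The output is long, since liboqs (via pqclean) rearranges and renames things,
but the point is simply that the trees are not identical.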

Second observation: When I run

git clone https://github.com/open-quantum-safe/liboqs.git
cd liboqs
mkdir build
cd build
cmake -GNinja ..
ninja
tests/speed_kem Classic-McEliece-348864f

on a Skylake (Ubuntu 22.04, gcc 11.4), even without rewinding to
liboqs-0.9.0-rc1, I see around 127 Mcycles for keygen.

Third observation: At least one of the code changes is _obviously_ a
slowdown. Namely, instead of the fast int32_sort() from the submitted
code, liboqs substitutes a much slower (in particular, non-vectorized)
sorting algorithm. If I do

git clone https://github.com/open-quantum-safe/liboqs.git
cd liboqs
wget https://cr.yp.to/2025/20250805-oqs-348864f-patch.txt
patch -p1 < 20250805-oqs-348864f-patch.txt
mkdir build
cd build
cmake -GNinja ..
ninja
tests/speed_kem Classic-McEliece-348864f

to copy the submitted int32_sort() code back into liboqs, I see around
40 Mcycles on Skylake for keygen. That's still behind

https://lib.mceliece.org/speed.html

reporting 32 Mcycles on Skylake, but it's not in the same ballpark as
the insanely wrong numbers from NIST IR 8545.
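
As a side illustration of the measurement technique (this sketch is not part
of the submission or of liboqs; the 4096-element array size is just an
example, not necessarily what keygen sorts), here is how a single sorting
call can be timed in isolation with libcpucycles, using plain qsort() as a
stand-in for a non-vectorized sort:

cat > sorttime.c << 'EOF'
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <cpucycles.h>

/* compare two int32 values for qsort */
static int cmp_int32(const void *a, const void *b)
{
  int32_t x = *(const int32_t *)a, y = *(const int32_t *)b;
  return (x > y) - (x < y);
}

int main(void)
{
  enum { N = 4096, TRIALS = 1000 };
  static int32_t x[N];
  long long best = -1;
  for (int t = 0; t < TRIALS; ++t) {
    for (int i = 0; i < N; ++i) x[i] = random();   /* fresh input each trial */
    long long t0 = cpucycles();
    qsort(x, N, sizeof x[0], cmp_int32);           /* stand-in for a non-vectorized sort */
    long long t1 = cpucycles();
    if (best < 0 || t1 - t0 < best) best = t1 - t0;
  }
  printf("qsort of %d int32s: %lld cycles (best of %d trials)\n", N, best, TRIALS);
  return 0;
}
EOF
gcc -O2 sorttime.c -o sorttime -lcpucycles && ./sorttime

(This assumes libcpucycles is installed, e.g. libcpucycles-dev on Debian or
the install commands in the 10 Aug message later in this thread.)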

To be clear, I'm not saying liboqs did something wrong here: liboqs has
many goals other than speed, and presumably the code changes are related
to some of those goals. But NIST _definitely_ did something wrong: NIST
IR 8545 claims to report benchmarks of the Classic McEliece submission,
but in fact it's reporting benchmarks of a slower algorithm.

There are endless ways to verify that the submission is much faster than
what NIST IR 8545 claims. This is my fifth time asking NIST to correct
its factually inaccurate report.

Loganaden Velvindron

Aug 8, 2025, 3:47:17 AM
to Moody, Dustin (Fed), D. J. Bernstein, pqc-forum
Dear Dr Moody,

Thank you for taking the time to publicly comment on this issue. To resolve
it, is there a way to publish an updated version of NIST IR 8545?

I'm still seeing research projects using Classic McEliece, such as this one:
https://eprint.iacr.org/2025/1179.pdf

John Mattsson

Aug 8, 2025, 4:38:00 AM
to D. J. Bernstein, pqc-...@list.nist.gov

Hi Dan,

You are likely correct that the numbers in NIST IR 8545 are incorrect. While the numbers for XML encryption and SAML SSO are important, I’m not sure that key generation numbers matter much, since the main use case for Classic McEliece appears to be static keys.

It is somewhat surprising to see you criticize both the IETF and NIST for lack of transparency while being active in ISO. In my view, and in the view of many others, the NIST cryptography team is doing excellent work. NIST maintains public mailing lists for discussion, asks for public comments, actively encourages open dialogue around its standardization work, publishes evaluation reports like NIST IR 8545, and makes its standards publicly available. This is a far more open process than the CAESAR competition, and worlds better than ISO. ISO not only lacks public discussion but actively forbids it, and its standards are locked behind paywalls. This makes ISO cryptography standards more of a cybersecurity risk than a cybersecurity enabler.

I would still like to see the CFRG specify Classic McEliece. For static keys, I think it’s a very good complement to ML-KEM.

Cheers,
John


Daniel Apon

Aug 8, 2025, 7:36:55 PM
to John Mattsson, D. J. Bernstein, pqc-...@list.nist.gov
"It is somewhat surprising to see you criticize both the IETF and NIST for lack of transparency while being active in ISO. In my view, and many others, the NIST cryptography team is doing excellent work. NIST maintains public mailing lists for discussion, asks for public comments, actively encourages open dialogue around its standardization, publishes evaluation reports like NIST IR 8545, and makes its standards publicly available. This is a far more open process than the CAESAR competition—and worlds better than ISO. ISO not only lacks public discussion but actively forbids it, and its standards are locked behind paywalls. This makes ISO cryptography standards more of a cybersecurity risk than a cybersecurity enabler.

I would still like to see the CFRG specify Classic McEliece. For static keys, I think it’s a very good complement to ML-KEM."

Very well said, John.

Speaking for myself,
--Daniel Apon


D. J. Bernstein

Aug 10, 2025, 2:54:54 PM
to pqc-...@list.nist.gov
John Mattsson writes:
> You are likely correct that the numbers in NIST IR 8545 are incorrect.

"Likely"? It's now completely clear what happened:

* liboqs, via pqclean, took the optimized round-4 mceliece*/avx code
but drastically slowed it down, most importantly by replacing its
fast sorting code with much slower sorting code.

* Some liboqs web page had measurements of the much slower code,
correctly labeled as measurements of "OQS version 0.9.0-rc1".

* NIST IR 8545 cited that web page but falsely labeled the results
as measurements of the round-4 Classic McEliece submission.

Instead of posting about what's "likely", why don't you just try running
the actual code and seeing how insanely inaccurate NIST's numbers are?

The script below measures the cycles for the submitted 348864f keygen
code on 64-bit Intel/AMD CPUs with AVX2. Recall that NIST had asked
submitters to provide "an AVX2 (Haswell) optimized implementation" for
measuring performance of the submissions. Trying this script on an Intel
Core i7-4765T (Haswell) under Debian 13 (gcc 14.2), I see around 34
million cycles, plus or minus about a million, on several runs.

This is marginally faster than what the round-4 documentation said for
Ubuntu 18.04 with an older version of gcc; of course, one could install
that older version for reproducibility.

This is nowhere near the 114 million cycles falsely claimed in NIST IR
8545. (The gap is actually even larger: NIST quietly switched from its
designated Haswell comparison platform to Cascade Lake, a bit faster
than Haswell.)

Again, NIST's 114 million cycles are actually from an algorithm that was
drastically slowed down by a third party. For comparison, NIST said
again and again that it was going to evaluate the _submitted_
algorithms, and that it _did_ evaluate the submitted algorithms:

* NIST solicited submissions "from any interested party for
candidate algorithms".

* NIST's regulations for the project pointed to "criteria that will
be used to appraise the candidate algorithms".

* NIST said that it "will perform a thorough analysis of the
submitted algorithms in a manner that is open and transparent to
the public, as well as encourage the cryptographic community to
also conduct analyses and evaluation".

* NIST IR 8545 is the round-4 project report and claims to describe
"the evaluation and selection process of these fourth-round
candidates".

* The report specifically says that the speed tables are for these
submissions: "Tables 3 through 5 show representative benchmarks
for key generations, encapsulations, and decapsulations of BIKE,
HQC, and Classic McEliece, respectively. Each row is a specific
parameter set from the corresponding submission."

NIST's stonewalling includes writing that "performance can vary
significantly depending on factors such as platform, application,
computing environment, and network conditions". Oh, also claiming that
NIST is free to exempt any publication from NIST's information-quality
standards simply by labeling the publication as a NIST IR.

---D. J. Bernstein (speaking for myself)


# "Submitters may assume that these libraries are installed"
# (see also NIST's pqc-forum email dated 30 Aug 2017 14:19:29 +0000
# guaranteeing specifically <libkeccak.a.headers/SimpleFIPS202.h>)
cd
git clone https://github.com/XKCP/XKCP.git
cd XKCP
git checkout 56ae09923153c3e801a6891eb19e4a3b5bb6f6e2 # October 2022 version
time make AVX2/libXKCP.a
time make AVX2/libXKCP.so
mkdir -p $HOME/include
mkdir -p $HOME/lib
ln -s $HOME/XKCP/bin/AVX2/libXKCP.a.headers $HOME/include/libkeccak.a.headers
ln -s $HOME/XKCP/bin/AVX2/libXKCP.a $HOME/lib/libkeccak.a
ln -s $HOME/XKCP/bin/AVX2/libXKCP.so $HOME/lib/libkeccak.so
export CPATH="$CPATH:$HOME/include"
export LIBRARY_PATH="$LIBRARY_PATH:$HOME/lib"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/lib"

# Cycle-counting library (or just install libcpucycles-dev on Debian)
cd
wget -m https://cpucycles.cr.yp.to/libcpucycles-latest-version.txt
version=$(cat cpucycles.cr.yp.to/libcpucycles-latest-version.txt)
wget -m https://cpucycles.cr.yp.to/libcpucycles-$version.tar.gz
tar -xzf cpucycles.cr.yp.to/libcpucycles-$version.tar.gz
cd libcpucycles-$version
./configure --prefix=$HOME && make -j8 install

# Now measure the round-4 Classic McEliece submission
cd
wget https://classic.mceliece.org/nist/mceliece-20221023.tar.gz
# or, same file: https://web.archive.org/web/20230108173322/https://classic.mceliece.org/nist/mceliece-20221023.tar.gz
# or, same file although the name is different: https://web.archive.org/web/20221120111204/https://csrc.nist.gov/csrc/media/Projects/post-quantum-cryptography/documents/round-4/submissions/mceliece-Round4.tar.gz
tar -xzf mceliece-20221023.tar.gz
cd mceliece-20221023/Additional_Implementations/kem/mceliece348864f/avx
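# patch the build to link libcpucycles, and patch the NIST KAT program
# to print keygen cycle counts on stderr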
sed -i 's/-lkeccak/-lkeccak -lcpucycles/' build
sed -i '1i#include "cpucycles.h"' nist/kat_kem.c
sed -i '/ret_val = crypto_kem_keypair/ ilong long t = cpucycles();' nist/kat_kem.c
sed -i '/print.*PUBLIC/ ifprintf(stderr,"keygen %lld\\n",cpucycles()-t);' nist/kat_kem.c
make
./run; ./run; ./run; ./run; ./run; ./run; ./run

Moody, Dustin (Fed)

Aug 11, 2025, 10:35:54 AM
to D. J. Bernstein, pqc-forum

As stated in NIST IR 8545, the benchmark tables were intended to provide representative data points, not authoritative or exhaustive metrics. To that end, we cited Open Quantum Safe because it provided a consistently structured and well-documented dataset that served our goal of presenting a fair, representative comparison across submissions.  We accurately reproduced the numbers reported there.  We did not disregard SUPERCOP; it was among the sources we reviewed during the evaluation process.  We are of course also aware of the numbers reported in the submission documents and took those into account.  Not every number, data point, or result that we evaluated made its way into the Report, as it was a summary.  The key generation cycle counts for Classic McEliece did not play a significant role in the rationale for not selecting it for standardization.  
 
Dustin Moody
NIST PQC



From: pqc-...@list.nist.gov on behalf of D. J. Bernstein
Sent: Sunday, August 10, 2025 2:54 PM
To: pqc-forum
Subject: [EXTERNAL] Re: [pqc-forum] Classic McEliece


D. J. Bernstein

Aug 11, 2025, 3:53:04 PM
to pqc-...@list.nist.gov
Moody, Dustin (Fed) writes:
> As stated in NIST IR 8545, the benchmark tables were intended to
> provide representative data points,

Here we go again. Representative of _what_?

The numbers NIST used aren't for the round-4 submission: they're for
much slower software (and now we know that it's software that was slowed
down by a third party). But NIST labels them as benchmarks of the
round-4 submission. This is data falsification by NIST.

> not authoritative

No, the report had no such disclaimer, nor would such a disclaimer be
relevant to the problem at hand, namely NIST fabricating data.

> or exhaustive metrics.

Irrelevant. Table 5 uses the typical speed metrics (keygen cycles, enc
cycles, dec cycles), and presents misinformation from NIST regarding the
performance of the round-4 Classic McEliece submission in those metrics.

> To that end, we cited Open Quantum Safe because it provided a
> consistently structured and well-documented dataset that served our
> goal of presenting a fair, representative comparison across
> submissions.

No, that reference did not cover the round-4 Classic McEliece
submission. NIST promised that it would evaluate the "submitted
algorithms"; that reference presented measurements of a much slower
third-party algorithm; ergo, it was wrong for NIST to use that data.

We now know exactly what created the bulk of the slowdown: namely,
liboqs removed the fast sorting subroutine in the AVX2-optimized round-4
Classic McEliece code, and replaced it with much slower sorting code.
But, even without this specific knowledge, NIST had a responsibility to
check whether it was looking at evaluations _of the submission_, rather
than evaluations of something else.

The page cited by NIST reports that it's measuring OQS version
0.9.0-rc1. NIST suppressed that information, and falsely claimed that
these were benchmarks of the round-4 submission.

Regarding fairness, reporting slowed-down numbers for just _one_ of the
submissions would have been glaringly unfair even if NIST had openly
admitted the substitution. But NIST _hid_ the substitution. That's what
turns the unfairness into outright falsification.

There are other reasons NIST obviously shouldn't have used this data set.
For example:

* NIST's information-quality standards require measurements to be
accompanied by quantitative indications of variability. This data
set flunks that requirement, as does Table 5 of NIST IR 8545. (A
small sketch after this list shows one way to report such a spread.)

* NIST had asked teams for "an AVX2 (Haswell) optimized
implementation". This data set covers only a few CPUs, _not_
including Haswell. NIST has been repeatedly waving at platform
differences as supposedly explaining Table 5; that isn't true (the
numbers that NIST took from this data set are for Cascade Lake,
which is _faster_ than Haswell), but _seeing_ that it isn't true
is extra work for readers. NIST should have stuck to its announced
comparison platforms instead of suddenly complicating the picture.
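
For example, here is a minimal way (not part of the report or the submission)
to attach a variability indication to the keygen measurements, using the
instrumented ./run from my 10 August message, which prints "keygen <cycles>"
on stderr for each keypair it generates:

./run 2>&1 | awk '/^keygen /{print $2}' | sort -n \
  | awk '{a[NR]=$1} END{print "samples",NR,"min",a[1],"median",a[int((NR+1)/2)],"max",a[NR]}'

Anything along these lines, repeated across runs, would at least give readers
a sense of the spread.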

But the really big problem here is NIST pretending that the numbers are
for the submission, when in fact the numbers are for a much slower
third-party algorithm.

> We accurately reproduced the numbers reported there.

If a data point says "The observed temperature in D.C. at 23:59 on 8
August 2025 was 67 degrees", manipulating it to say "The observed
temperature in D.C. at 23:59 on 8 August 2025 was 73 degrees" is data
fabrication. Manipulating it to say "The observed temperature in
Anchorage at 23:59 on 8 August 2025 was 67 degrees" is also data
fabrication. One can't defend this by saying that the 67 was copied
correctly. The data point includes the number _and_ the statement of
what the number was observing.

NIST took information from a source that said it was measuring "OQS
version 0.9.0-rc1". NIST removed the original labeling of these numbers
as measurements of "OQS version 0.9.0-rc1". NIST pretended that these
were measurements of the round-4 Classic McEliece submission. This is
NIST fabricating data, and more specifically NIST falsifying data. The
misconduct here is similar to, e.g., the case reported in

https://retractionwatch.com/2025/03/21/osaka-dental-university-fabricated-data-investigation/

("Because images in the other two articles were identical to images in
the 2014 paper, the university determined the data in the later papers
were fabricated").

> We did not disregard SUPERCOP; it was among the sources we reviewed
> during the evaluation process. We are of course also aware of the
> numbers reported in the submission documents and took those into
> account.

To clarify, you're saying that NIST _saw_ that the numbers in Table 5
were much larger than the numbers from the submission and from SUPERCOP?
How exactly are you claiming that NIST took this into account?

The public evidence is of NIST IR 8545 not saying anything about this
gigantic gap. The most charitable explanation is that NIST looked at
only one source, somehow misunderstood what that source was measuring,
and recklessly took those numbers without checking any other sources
(such as the speed table provided with the submission).

It's much worse if NIST looked at multiple sources and _knew_ that it
was taking an outlier while suppressing the numbers from other sources.
A 4x gap in cycle counts is screaming "we're not measuring the same
software".

> Not every number, data point, or result that we evaluated made its way
> into the Report, as it was a summary.

"We operate transparently. We've shown all our work" (source:
https://web.archive.org/web/20211115191840/https://www.nist.gov/blogs/taking-measure/post-quantum-encryption-qa-nists-matt-scholl)

> The key generation cycle counts for Classic McEliece did not play a
> significant role in the rationale for not selecting it for
> standardization.

NIST's report fails to quantify the weights placed on individual
comparison factors. This gives NIST the freedom to respond to _any_
error by claiming that the error wasn't "significant" and that fixing
the error wouldn't have changed NIST's decisions.

Such claims lack credibility: why is NIST putting information into a
report in the first place if the information doesn't matter? These speed
tables are prominently placed on page 6 of a 34-page report. They're the
only justification that the report provides for various statements in
the text, such as the claim that Classic McEliece keygen is "three
orders of magnitude more costly than HQC".

The report also isn't limited to looking at past decisions, and isn't
limited to looking at NIST's decisions. In particular, even though the
report fails to recognize the existing deployments of Classic McEliece
documented on https://mceliece.org, the report does admit that there's
an ongoing process of multiple parties at least _considering_ Classic
McEliece (to quote the report: "After the ISO standardization process
has been completed, NIST may consider developing a standard for Classic
McEliece based on the ISO standard"). It's irresponsible for NIST to be
feeding misinformation into future decisions.

In the end, claiming that NIST's falsified data hasn't had an impact
doesn't remove NIST's obligation to issue a correction. As

https://www.ams.org/about-us/governance/policy-statements/sec-ethics

puts it, we have a responsibility to "correct in a timely way or to
withdraw work that is erroneous".
Message has been deleted

Jacob Alperin-Sheriff

Aug 11, 2025, 7:37:02 PM
to pqc-...@list.nist.gov
I am also very concerned about another issue in NIST IR 8545. The report says:

“NIST is selecting public-key cryptographic algorithms through a public, competition-like process to specify additional digital signature, public-key encryption, and key-establishment algorithms”

As I recall, NIST was very clear in early rounds that it wasn’t a competition. When did this change? Why? Questions abound! 

-Jacob Alperin-Sheriff



Andrey Jivsov

Aug 12, 2025, 9:18:09 AM
to pqc-...@list.nist.gov
I am afraid to touch this hot potato, but I don't think this is mental illness. Accuracy is important; people can make mistakes; it's important to correct mistakes. Many organizations and their employees assume that everything NIST publishes is factually true, and one reason for this is the openness of the processes NIST runs. The number of cycles is a fairly objective metric, so if an erratum is warranted, I think NIST should publish it somehow.

On Mon, Aug 11, 2025 at 4:24 PM Daniel Apon <dapon....@gmail.com> wrote:
I know I’m recommended not to reply directly to these manifesto posts, but I’m compelled to say:

This is mental illness.

dustin...@nist.gov

Aug 12, 2025, 12:06:53 PM
to pqc-forum, Daniel Apon
Daniel,

Your post was out of line, and I am going to delete it from the forum.  

A reminder to always keep comments on the pqc-forum polite, civil, and focused on the important issues.  

Dustin Moody
NIST PQC

On Monday, August 11, 2025 at 7:24:05 PM UTC-4 Daniel Apon wrote:
I know I’m recommended not to reply directly to these manifesto posts, but I’m compelled to say:

This is mental illness.
