new quantum cryptanalysis of CSIDH

Christopher J Peikert

Jun 18, 2019, 4:54:42 PM
to pqc-forum
Greetings all,

This message does not concern a NIST PQC candidate, but I expect that
it will be of interest to the forum because it relates to the recent
post-quantum candidate CSIDH ("Commutative SIDH", pronounced "sea
side").

We cryptanalyze CSIDH using a variant of Kuperberg's quantum
"collimation sieve." Our analysis shows, for example, that CSIDH-512
key recovery can be done with only about 2^16 quantum evaluations of
the group action (on a uniform superposition) and about 2^40 bits of
classical memory that is quantumly accessible.

The full abstract appears below, and the paper is at
https://web.eecs.umich.edu/~cpeikert/pubs/csidh-sieve.pdf (and should
appear on eprint soon).

Sincerely yours in cryptography,
Chris

Recently, Castryck, Lange, Martindale, Panny, and Renes proposed
\emph{CSIDH} (pronounced ``sea-side'') as a candidate post-quantum
``commutative group action.'' It has attracted much attention and
interest, in part because it enables noninteractive
Diffie--Hellman-like key exchange with quite small
communication. Subsequently, CSIDH has also been used as a foundation
for digital signatures.

In 2003--04, Kuperberg and then Regev gave asymptotically
subexponential quantum algorithms for ``hidden shift'' problems, which
can be used to recover the CSIDH secret key from a public key. In
2013, Kuperberg gave a follow-up quantum algorithm called the
\emph{collimation sieve} (``c-sieve'' for short), which improves the
prior ones, in particular by using exponentially less quantum memory
and offering more parameter tradeoffs. While recent works have
analyzed the concrete cost of the original algorithms (and variants)
against CSIDH, there seems not to have been any consideration of the
c-sieve.

This work fills that gap. Specifically, we generalize Kuperberg's
collimation sieve to work for arbitrary finite cyclic groups, provide
some practical efficiency improvements, give a classical (i.e.,
non-quantum) simulator, run experiments for a wide range of parameters
up to and including the actual CSIDH-512 group order, and concretely
quantify the complexity of the c-sieve against CSIDH.

Our main conclusion is that the proposed CSIDH-512 parameters provide
relatively little quantum security beyond what is given by the cost of
quantumly evaluating the CSIDH group action itself (on a uniform
superposition). The cost of key recovery is, for example, only
about~$2^{16}$ quantum evaluations using~$2^{40}$ bits of quantumly
accessible \emph{classical} memory (plus insignificant other
resources); moreover, these quantities can be traded off against each
other. (This improves upon a recent estimate of~$2^{32.5}$ evaluations
and~$2^{31}$ qubits of \emph{quantum} memory, for a variant of
Kuperberg's original sieve.) Therefore, under the plausible
assumption that quantum evaluation does not cost very much more than
indicated by a recent ``best case'' analysis, CSIDH-512 does not
achieve the claimed 64 bits of quantum security, and it falls well
short of the claimed NIST security level~1 when accounting for the
MAXDEPTH restriction.

D. J. Bernstein

Jun 18, 2019, 8:54:07 PM
to pqc-forum
Pointing out various content problems and a historical problem.

Christopher J Peikert writes:
> under the plausible assumption that quantum evaluation does not cost
> very much more than indicated by a recent ``best case'' analysis,
> CSIDH-512 does not achieve the claimed 64 bits of quantum security

Peikert's "a more prudent estimate would be closer to 40 + 16 = 56"
calculation (page 16 of the paper), and his <64 corollary, are actually
based on very optimistic assumptions for the attacker. For example:

* Peikert claims that [BLMP19] estimated about "2^40 nonlinear qubit
operations". [BLMP19] actually says about "2^40 nonlinear bit
operations", which is not the same thing. [BLMP19] explains that
this implies a quantum cost of at most 14 times as many T-gates
(assuming free NOT etc.) by generic conversions. Probably 14 can be
improved in non-generic ways, but assuming 1 is unjustified.

* The same generic conversions require around 2^40 qubits, vastly
more than the resources mentioned by Peikert. [BLMP19] outlines a
non-generic way to reduce this to the scale of 2^20 qubits but at
the cost of another factor 4 in the number of T-gates. Peikert
ignores this factor 4.

* Peikert also ignores the following [BLMP19] quote regarding other
overheads: "Furthermore, even if enough qubits are available,
simply counting qubit operations ignores critical bottlenecks in
quantum computation. Fault-tolerant quantum computation corrects
errors in every qubit at every time step, even if the qubit is
merely being stored; see Appendix A.5. Communicating across many
qubits imposes further costs; see Appendix A.6. It is thus safe to
predict that the actual cost of a quantum CSIDH query will be much
larger than indicated by our operation counts. Presumably the gap
will be larger than the gap for, e.g., the AES attack in [28],
which has far fewer idle qubits and much less communication
overhead." (Working out the full costs here will require analyzing
the parallelizability of quantum isogeny computations.)

* Peikert also makes very optimistic assumptions regarding the cost
of QRACM. I haven't seen papers that come up with such low costs
after analyzing plausible physical realizations of QRACM. See,
e.g., https://arxiv.org/pdf/1502.03450.pdf to understand some of
the obstacles.

The first three of these points are reasons to question the "evaluation"
cost assumptions that Peikert makes, either implicitly or explicitly.
The fourth point shows that the abstract's list of assumptions is
incomplete, and gives reasons to question the missing assumption.

Beyond all of the above, there's an assumption that allowing uniform
group elements doesn't cost much extra. I agree that _this_ assumption
sounds plausible, but it would still be good to replace the error-prone
guesswork with quantification and analysis of the exact quantum costs.
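The arithmetic behind the numbers in dispute above can be laid out in a few lines of Python. This is purely an illustration of the quantities cited in this thread (Peikert's 40 + 16 = 56 estimate, the generic factor-14 bit-operations-to-T-gates conversion, and the extra factor 4 from [BLMP19]'s qubit reduction); the thread itself disputes whether adding exponents this way is meaningful, and it ignores fault tolerance and QRACM entirely.

```python
import math

# Quantities cited in the thread (illustrative only; the thread disputes
# whether simply adding exponents like this is meaningful).
log2_oracle_queries = 16       # ~2^16 quantum group-action evaluations
log2_bit_ops_per_query = 40    # [BLMP19]: ~2^40 nonlinear *bit* operations

# Peikert's "more prudent estimate": 40 + 16 = 56
naive_bits = log2_oracle_queries + log2_bit_ops_per_query

# Generic conversion to T-gates: at most a factor 14 (assuming free NOT etc.)
with_tgate_factor = naive_bits + math.log2(14)

# [BLMP19]'s non-generic qubit reduction costs another factor 4 in T-gates
with_qubit_reduction = with_tgate_factor + math.log2(4)

print(f"naive estimate:        2^{naive_bits}")
print(f"+ factor 14 (T-gates): 2^{with_tgate_factor:.1f}")
print(f"+ factor 4 (qubits):   2^{with_qubit_reduction:.1f}")
```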

> While recent works have analyzed the concrete cost of the original
> algorithms (and variants) against CSIDH, there seems not to have been
> any consideration of the c-sieve.

One of the invited talks at the February 2019 workshop on "Quantum
algorithms for analysis of public-key crypto" at the American Institute
of Mathematics was a talk by Kuperberg. In the talk, Kuperberg mentioned
the application to isogenies, made clear that his 2011 algorithm (the same
"c-sieve" algorithm, for which Peikert cites the final 2013 paper) superseded
the previous algorithms, and explained how to quantify this.

This in turn prompted at least two publications with "consideration of"
Kuperberg's 2011 algorithm in the isogeny context:

https://quics.umd.edu/events/hidden-shift-algorithms-20
(March 2019 abstract from another talk by Kuperberg)

https://cr.yp.to/talks/2019.05.21/slides-djb-20190521-qisog-4x3.pdf
("How many queries do these attacks perform? 2011 Kuperberg
supersedes previous papers.")

The second publication was the talk accompanying the Eurocrypt 2019
paper "Quantum circuits for the CSIDH" that Peikert cites.

To be clear, I'm not saying that Kuperberg did all the work to come up
with optimized concrete numbers for this context. However, there's
clearly a huge overlap between what Peikert is doing here and what
Kuperberg already did. The paper doesn't give adequate credit.

---Dan

daniel.apon

Jun 18, 2019, 9:17:18 PM
to pqc-forum, d...@cr.yp.to
Hi Dan,

Leaving aside for the moment (what appear to be) attribution/credit questions w.r.t. the new write-up, what is your best take on the number of bits X of quantum security attained by CSIDH (accounting for the union of this new paper and prior art)?

In short: What is X in your view?
(A number, a range, or an estimated number or range are all fine.)

Thanks much,
--Daniel

daniel.apon

Jun 18, 2019, 9:19:22 PM
to pqc-forum, d...@cr.yp.to
Clarifying: CSIDH-512.

Christopher J Peikert

Jun 19, 2019, 10:43:12 PM
to pqc-forum
> Christopher J Peikert writes:
> > under the plausible assumption that quantum evaluation does not cost
> > very much more than indicated by a recent ``best case'' analysis,
> > CSIDH-512 does not achieve the claimed 64 bits of quantum security
>
> Peikert's "a more prudent estimate would be closer to 40 + 16 = 56"
> calculation (page 16 of the paper), and his <64 corollary, are actually
> based on very optimistic assumptions for the attacker. For example:

Time will tell how optimistic the assumptions are, but a few brief points:

1. The above estimate is proposed explicitly as a matter of prudence;
it is not affirmatively claiming a 2^56 quantum attack.

2. Dan omitted "depending on how the cost of QRACM is modeled in
relation to other resources" from the above quote, which relates to
this:

> * Peikert also makes very optimistic assumptions regarding the cost
> of QRACM. I haven't seen papers that come up with such low costs
> after analyzing plausible physical realizations of QRACM. See,
> e.g., https://arxiv.org/pdf/1502.03450.pdf to understand some of
> the obstacles.

3. For implementing the quantum oracle, one reason for some optimism
is given in the "Further Research" section: "because the c-sieve
requires so few oracle queries (e.g., 2^16 for CSIDH-512), some
immediate improvement may be obtainable simply by increasing the error
probability of the oracle, from the 2^{-32} considered in [BLMP'19]."

In other words, the question is how much the quantum circuits for
CSIDH(-512) can be improved by raising the error probability to a
still-tolerable level.

Lastly, I have no idea what Dan finds objectionable about this:

> > While recent works have analyzed the concrete cost of the original
> > algorithms (and variants) against CSIDH, there seems not to have been
> > any consideration of the c-sieve.

"Consideration" is clearly referring to "analyz[ing] the concrete
cost," as was done for earlier algorithms. The prior sentence says
that the c-sieve "improves the prior ones [that break CSIDH], in
particular by using exponentially less quantum memory and offering
more parameter tradeoffs," so it obviously *applies* just as well to
CRS/CSIDH key search. However, the only available analysis was
asymptotic, not concrete.

(See also the introduction, which says that Kuperberg's c-sieve
"subsumes his original one and Regev's variant," and "has been briefly
cited in some of the literature, [but] its implications for concrete
CSIDH parameters appear not to have been considered yet. That is the
question we address in this work.")

If anyone else thinks this "doesn't give adequate credit," I genuinely
would like to know. Otherwise, I'll look forward to a retraction of
the allegation, and a discussion centered on technical matters instead
of bad-faith parsing.

D. J. Bernstein

Jun 20, 2019, 1:07:36 PM
to pqc-forum
Page 1 of eprint 2019/725 claims that "CSIDH-512 does not achieve the
claimed 64 bits of quantum security" under the "plausible assumption
that quantum evaluation does not cost very much more than indicated by a
recent 'best case' analysis".

This claim is not justified. The paper's argument for this conclusion is
actually based on implausible assumptions. More precisely, the argument
relies on

(1) implausible assumptions not admitted in the claim,
(2) implausible assumptions mischaracterized as plausible, and
(3) one plausible assumption.

For example, the paper implausibly assumes low-cost quantum access to
RAM. Reasons to disbelieve this particular assumption appear, e.g., in
https://arxiv.org/pdf/1502.03450.pdf, and in the literature justifying
NIST's dismissal of BHT, and in the literature justifying SIKE's switch
to smaller key sizes. (Will 2019/725 be extended, I wonder, to excitedly
announce that SIKE round 2 doesn't meet its claimed security level?)

This is an example of an implausible assumption not admitted in this
claim. The claim refers only to assumptions regarding the cost of
"quantum evaluation" ("quantumly evaluating the CSIDH group action"),
and unjustifiably omits the paper's assumptions regarding the rest of
the algorithm, such as the assumption of low-cost quantum access to RAM.

Other sentences of 2019/725 state the assumption of low-cost quantum
access to RAM, but this doesn't justify omitting the assumption here,
and doesn't make the assumption plausible. Page 4 of the paper points to
a way to replace this (implausible) assumption by other (implausible)
assumptions, but this still doesn't justify omitting the assumption.

I'm not saying that the assumptions regarding the cost of "quantum
evaluation" are plausible. On the contrary, this paper makes an
implausible series of jumps between cost models---ignoring all of the
relevant literature, including warnings in the introduction of [BLMP19].
One of these jumps comes from 2019/725 actively misstating [BLMP19]'s
conclusions as follows:

[BLMP19] analyzed the concrete cost of quantumly evaluating the CSIDH
group action. For the CSIDH-512 parameters, they arrived at an
estimate of approximately 2^40 nonlinear qubit operations ...

The actual quote from [BLMP19] regarding a number close to 2^40 is
"nonlinear bit operations", which is not the same thing as 2019/725's
claimed "nonlinear qubit operations". As explained in textbooks and
reviewed in [BLMP19], one has to pay for reversibility, and for building
nonlinear bit operations from T-gates. 2019/725 doesn't give reasons to
think that these overheads can be eliminated in this context; I haven't
found anything in 2019/725 even acknowledging that the issue exists.

This particular gap between nonlinear bit operations and T-gates is at
most a factor 14. However:

(1) Despite 2019/725's overconfident claim of beating 64 bits, and
calculation of 56 bits, the author says that the paper "is not
affirmatively claiming a 2^56 quantum attack". It isn't clear
what exactly the gap below 64 bits is supposed to be, but clearly
the gap isn't large, so a few bits can destroy the conclusion.

(2) There's a huge additional cost for fault tolerance, and
presumably this will be an even bigger issue for isogeny
computations than it is for Grover attacks against AES, SHA-2,
etc. See the [BLMP19] quote from my previous message.

The wording of the claim's "plausible assumption" draws the reader's
attention to one fragment of this assumption, namely the gap between a
"best case" group distribution and a uniform distribution. This is where
[BS18] estimates a 2^2 slowdown. It's plausible that the actual slowdown
is even smaller. But the claim states a broader assumption than this,
incorrectly says that the broader assumption is plausible, and, as
explained above, unjustifiably omits another implausible assumption.

Christopher J Peikert writes:
> Time will tell how optimistic the assumptions are

Right now a reasonable reader who takes the sentence

Therefore, under the plausible assumption that quantum evaluation
does not cost very much more than indicated by a recent "best case"
analysis, CSIDH-512 does not achieve the claimed 64 bits of quantum
security, and it falls well short of the claimed NIST security level
1 when accounting for the MAXDEPTH restriction

from page 1 of 2019/725 has no idea that, _beyond_ the assumption stated
here, the paper is _also_ making an assumption of low-cost quantum
access to RAM.

Is it conceivable that the papers shredding this assumption are all
missing some amazing idea for how to build low-cost QRAM? Sure. But
2019/725 deceived this reader into thinking that this isn't in the list
of assumptions.

Appealing to the process of evaluating assumptions is rather strange as
a response to the problem of misinformation regarding which assumptions
are being used. This misinformation damages the process and needs to be
fixed.

Furthermore, the assumption that _is_ stated here is much broader than
the plausible part of it that's highlighted (the best-case-vs.-uniform
switch). The paper omits several caveats stated in [BLMP19], ignores
existing analyses such as https://eprint.iacr.org/2016/992.pdf, and
makes an unsupported plausibility claim that actively deceives readers
regarding what's known on this topic.

> For implementing the quantum oracle, one reason for some optimism
> is given in the "Further Research" section: "because the c-sieve
> requires so few oracle queries (e.g., 2^16 for CSIDH-512), some
> immediate improvement may be obtainable simply by increasing the error
> probability of the oracle, from the 2^{-32} considered in [BLMP'19]."

This is a baffling response.

I'm saying that the published claim identified above misleads readers in
two ways: it hides implausible assumptions, and mischaracterizes other
implausible assumptions as plausible. This response is saying that the
claim of a security level below 64 bits for CSIDH-512 might be correct
for other reasons. How is this supposed to justify misleading readers
regarding the plausibility of the paper's assumptions?

Furthermore, anyone who actually reads [BLMP19] can see that, contrary
to what 2019/725 indicates, [BLMP19] systematically considers three
different error probabilities: 2^-1, 2^-32, and 2^-256. The fastest
algorithm in [BLMP19] is reported to use 106 iterations, 154 iterations,
and 307 iterations respectively; see page 29 of the paper. Given that
2^-1 uses almost 70% as many iterations as 2^-32, it's not reasonable to
hope for a big gap between 2^-16 and 2^-32.

Furthermore, [BLMP19] points to the software online for doing these
calculations. Simply running the "top2exact" script as per the
instructions shows that the probability reaches 2^-16 at 138 iterations,
which is 89.6% of the iterations used to reach 2^-32. In other words,
this "further research" was already done as part of [BLMP19], and
produces a difference in security levels of 0.11 bits. This is a rather
thin thread upon which to hang hopes of rescuing 2019/725's <64 claim.
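The iteration figures quoted above can be checked with a few lines of Python. The numbers are taken as stated in this message: [BLMP19]'s reported iteration counts at error probabilities 2^-1, 2^-32, and 2^-256, plus the 2^-16 count obtained from the "top2exact" script mentioned above.

```python
# Iteration counts reported in [BLMP19] (page 29) for its fastest algorithm,
# at oracle error probabilities 2^-1, 2^-32, 2^-256, plus the 2^-16 figure
# from running [BLMP19]'s "top2exact" script as described in this message.
iters = {"2^-1": 106, "2^-16": 138, "2^-32": 154, "2^-256": 307}

# 2^-1 already needs almost 70% as many iterations as 2^-32 ...
ratio_1_vs_32 = iters["2^-1"] / iters["2^-32"]

# ... and 2^-16 needs 89.6% as many, leaving little room for savings.
ratio_16_vs_32 = iters["2^-16"] / iters["2^-32"]

print(f"2^-1  vs 2^-32: {ratio_1_vs_32:.1%}")
print(f"2^-16 vs 2^-32: {ratio_16_vs_32:.1%}")
```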

-----

The rest of this message switches to the failure of 2019/725 to give
adequate credit to Kuperberg.

https://quics.umd.edu/events/hidden-shift-algorithms-20 clearly shows
that, before this paper, Kuperberg was not merely _considering_ but
publicly _recommending_ his 2011 algorithm, in particular in the context
of attacking isogenies:

These algorithms became more interesting when Childs, Jao, and
Soukharev showed that they yield a quantum algorithm to find
isogenies between elliptic curves. I will discuss my lesser known
second algorithm, which deserves more attention because it supersedes
my original algorithm as well as Regev's algorithm. The newer
algorithm has a better constant in the exponent, it is expensive only
in classical space and not quantum space, and it is tunable in
various ways.

Section 4.5 of Kuperberg's 2011 paper explains the critical steps in a
concrete analysis, and his first paper already explains how to build
simulators for this type of algorithm. Detailed tuning takes some work,
but 2019/725 will obviously lead readers to credit 2019/725 for more
fundamental work that is actually due to Kuperberg.

As a separate matter, I think it's fair to say that quite a few of us
learned tuning and analysis details from Kuperberg's talks beyond what
we learned from his 2011 paper. I'm skeptical of the idea that the
recent work on this topic was independent of this.

> > > While recent works have analyzed the concrete cost of the original
> > > algorithms (and variants) against CSIDH, there seems not to have been
> > > any consideration of the c-sieve.
> "Consideration" is clearly referring to "analyz[ing] the concrete cost,"

You could have written "there seems to have been no concrete analysis of
the c-sieve". Instead you switched to saying that there wasn't even any
"consideration" of the c-sieve. This broadens the clear literal meaning,
and the switch of wording suggests that this broadening was deliberate.

Perhaps this isn't what you meant. But, instead of trying to weaponize
arguable ambiguities as a way to evade responsibility for predictably
misleading readers into giving your work more credit than it deserves,
perhaps you could try making an effort to _eliminate_ ambiguities and
_prevent_ readers from being misled? I'm making this suggestion with all
due respect, and I hope you understand its constructive spirit.

I don't see where your paper is acknowledging that Kuperberg not only
_considered_ but also _recommended_ his 2011 algorithm to attack
isogenies. I also don't see where your paper is acknowledging the
overlap between its concrete analysis and the analysis that's already in
Kuperberg's papers.

---Dan

Christopher J Peikert

Jun 27, 2019, 2:09:46 PM
to pqc-forum
Greetings,

following comments from several people, I have posted an updated
version of my paper https://eprint.iacr.org/2019/725 that concretely
analyzes the quantum security of CSIDH using Kuperberg's collimation
sieve.

In summary, the updated version does the following:

1. It discusses the independent (and largely complementary) analysis
of the collimation sieve by Bonnetain and Schrottenloher.

2. It gives a precise cost in T-gates and qubits for implementing
quantumly accessible classical memory (QRACM, also known as QROM) with
ordinary RAM, using the method from Section III.C of
https://arxiv.org/pdf/1805.03662.pdf , as suggested by Schanck. The
bottom line is that for the parameters of interest, the QRACM
complexity is dwarfed by that of the oracle calls (under current
estimates for the latter).

Importantly, this falsifies Bernstein's assertion that "the paper
implausibly assumes low-cost quantum access to RAM. Reasons to
disbelieve this particular assumption appear, e.g., in
https://arxiv.org/pdf/1502.03450.pdf ..."

The 2015 paper cited by Bernstein analyzes the "bucket brigade"
design. The 2018 work cited in my paper uses a different design, and
says the following:

'A notable difference between this paper and most previous work on
QRAM [63, 66–68] is that we describe the cost of QROM in terms of a
fault-tolerant cost model: the number of T gates performed and the
number of ancilla qubits required. Under such cost models, the “bucket
brigade” QRAM design of Giovannetti et al. [63, 66] has T complexity
(and thus, also time complexity under reasonable error-correction
models) of O(L) regardless of the fact that it has depth O(log L)
because implementing it as an error-corrected circuit consumes O(L) T
gates and O(L) ancillae qubits. Our implementation of QROM consumes
only 4L T gates and log L ancillae, which is a constant-factor
improvement in T-count and an exponential improvement in space usage
over the construction of Giovannetti et al.'

3. It gives quantum T-gate estimates for a full attack on CSIDH-512,
under the (sole) plausible assumption that the oracle complexity is
not much more than what is obtained in the "best conceivable case"
analysis of [BLMP'19]. The final T-gate count is below 2^64.

The CSIDH paper does not explicitly state the metric by which
its "64 bits of quantum security" claims should be evaluated. However,
its Section 7.3 and Table 1 analyze "number of qubit operations,"
which are frequently separated into Clifford gates and much more
expensive T-gates (e.g., [BLMP'19] does this). Under this standard
interpretation, the attack falsifies the claimed 64 bits of security,
and it strongly invalidates the claimed NIST security level 1 when
accounting for the MAXDEPTH restriction.

Other security metrics, like "depth times width," can be considered as
well; see, e.g., https://eprint.iacr.org/2019/103 . There is reason to
believe that a careful analysis under this metric might yield
something close to 2^64, but at the moment it is unclear, and we leave
the question to future work.

Christopher J Peikert

Jun 27, 2019, 2:56:05 PM
to pqc-forum
PS: I don't mean to suggest that the "bucket brigade" design is
necessarily a bad choice of QRACM implementation for the collimation
sieve.

The "obstacles" that Bernstein refers to, citing
https://arxiv.org/pdf/1502.03450.pdf , are for algorithms that make a
large number of QRACM queries. However, the collimation sieve makes
only a small constant number (4) of QRACM queries per collimation
step, and (for sieve parameters of interest against CSIDH-512) does
only a moderate number of collimation steps, e.g., 256 or 128.

As the paper says on pages 1-2:

"It is important to distinguish between algorithms such as quantum
matrix inversion [9, 10] or quantum machine learning [11–14] that only
require a number of queries polynomial in n, and those such as quantum
searching [5] that require a number of queries super-polynomial in n.
In the former case, as will be seen, the maximum qRAM gate error rate
tolerated by the algorithms scales polynomially in n, and qRAM quantum
error correction may not be required. In the latter case, which is the
one that we concentrate on in this paper, the maximum tolerable qRAM
error rate scales super-polynomially in n and quantum error correction
is needed."
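The QRACM-vs-oracle comparison in the messages above can be sketched as back-of-envelope arithmetic. All inputs come from this thread and are contested to some degree: ~2^40 bits of quantumly accessible classical memory (treated here, loosely, as 2^40 memory cells), the 4L T-gate QROM lookup cost from arXiv:1805.03662, 4 QRACM queries per collimation step over ~256 steps, 2^16 oracle queries, and an optimistic stand-in of 2^40 T-gates per oracle call. Treat this as an illustration, not an attack estimate.

```python
import math

# Back-of-envelope comparison (numbers taken from this thread; all contested
# to some degree, so treat this as an illustration, not an attack estimate).

L = 2**40                   # bits of quantumly accessible classical memory
tgates_per_lookup = 4 * L   # QROM lookup cost from arXiv:1805.03662: 4L T-gates
lookups = 256 * 4           # ~256 collimation steps, 4 QRACM queries each

qracm_tgates = lookups * tgates_per_lookup

oracle_calls = 2**16        # c-sieve oracle queries for CSIDH-512
tgates_per_call = 2**40     # optimistic stand-in for the (disputed) oracle cost

oracle_tgates = oracle_calls * tgates_per_call

print(f"QRACM total:  2^{math.log2(qracm_tgates):.1f} T-gates")
print(f"oracle total: 2^{math.log2(oracle_tgates):.1f} T-gates")
```

Under these (optimistic, disputed) inputs the QRACM total of 2^52 T-gates is indeed smaller than the 2^56 T-gates spent on oracle calls, which is the shape of the "dwarfed by the oracle calls" claim; Bernstein's objection is to the inputs, not this arithmetic.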

D. J. Bernstein

Jun 27, 2019, 7:05:05 PM
to pqc-...@list.nist.gov
Christopher J Peikert writes:
> Other security metrics, like "depth times width,"
[ ... ]
> future work

In other words, the new version 2 of eprint 2019/725 does _not_ claim to
have an algorithm to attack CSIDH-512 with <2^64 qubit error-correction
steps. The author thus insists on

* switching to some other metric that's below 2^64,

* claiming that the other metric is the "standard" way to evaluate
security, and

* trying to convince the reader of this by pointing to the use of the
other metric in [BLMP19],

while continuing to suppress [BLMP19]'s warnings about the problems with
this metric.

> Importantly, this falsifies Bernstein's assertion that "the paper
> implausibly assumes low-cost quantum access to RAM. Reasons to
> disbelieve this particular assumption appear, e.g., in
> https://arxiv.org/pdf/1502.03450.pdf ..."

I stand by everything I said. The content and timeline show that "the
paper" mentioned in the quote is version 1 of eprint 2019/725. As I
said, that paper assumes, implausibly, low-cost quantum access to RAM.
Also, page 1 of the paper deceives readers into believing that the paper
_isn't_ making this assumption.

As far as I know, that paper has not been withdrawn, and has not been
the subject of an erratum. Issuing a new version of a paper does not
imply acknowledgment and correction of the errors in the original paper.

Furthermore, the new version seems to have limited its main claims
compared to the paper I was referring to. It seems quite unlikely that
the new claims imply the old claims, and I don't see such an implication
stated. (I would expect the paper to much more prominently analyze the
retreat---admitting how and why the claims have changed.)

Most importantly, the original paper's use of this assumption to derive
its conclusions is a simple historical fact that cannot be retroactively
"falsified", even if at some point other papers manage to obtain the
same conclusions without the assumption.

Failing to correct errors in papers after they have been pointed out is
in violation of basic standards of scientific ethics. Furthermore, if we
were discussing the NIST Post-Quantum Cryptography Standardization
Project (which is what I thought this mailing list was for), then the
"maturity" of analysis would be one of the official evaluation criteria,
and having a clear record of errors would be important input.

For comparison, Dent's well-known https://eprint.iacr.org/2002/174.pdf
begins with a detailed acknowledgment of errors in previous versions.
Such data points are tremendously helpful. It would be much harder to
analyze and extrapolate the frequency of errors if every error were
buried under incessant weaseling.

---Dan