I strongly support the comment that Scott Fluhrer made at the microphone during the NIST PQC workshop last week: “footguns,” or “ease of mis-implementation,” ought to be considered as an axis for security analysis of candidate schemes. CVEs don’t (frequently) happen because of mathematical attacks; they happen when a non-cryptographer is handed a complex spec to implement and nobody on their team has the expertise needed to notice a subtle deviation from the spec during code review. Simplicity of the spec leads to fewer vulnerabilities.
I would extend Scott’s sentiment to cover both the ease of implementing the primitive itself, and the ease of designing / implementing secure protocols and modes of operation around the primitive.
- - -
Mike Ounsworth | Software Security Architect
Entrust Datacard
--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
To view this discussion on the web visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/c879c485d970480890076e9eaab4a7d3%40PMSPEX05.corporate.datacard.com.
I’ll give a couple of examples where there is a difference.
The first example is the error vectors in Lattice Based schemes:
The ‘avoid-foot-cannon’ criterion would prefer (3) over (2), and (2) over (1).
Another place where it may come up is with error conditions (or other low-probability cases): a scheme that necessarily needs to perform some logic on an error in order to be secure is less attractive than one whose main logic does things right on its own.
The third place is the issue of CPA KEMs vs. CCA KEMs. One concern I have with a CPA KEM is that, while we know it means you generate fresh keys for every negotiation, we might run into a clever implementor who tries to optimize things by reusing keys for multiple sessions. This is Yet Another foot cannon (even if it is not within the primitive itself), but it does open the question as to whether NIST would want to approve CPA KEMs at all…
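The key-reuse footgun can at least be made loud in an API. Below is a purely hypothetical sketch (the `toy_keygen` and `toy_decaps` functions are stand-ins, not any candidate’s API) of a wrapper that turns CPA-KEM key reuse into a hard error instead of a silent security loss:

```python
import os

class EphemeralDecapsKey:
    """Holds a CPA-KEM decapsulation key that may be used exactly once."""

    def __init__(self, keygen):
        self._public, self._secret = keygen()
        self._used = False

    @property
    def public(self):
        return self._public

    def decapsulate(self, decaps, ciphertext):
        # Fail loudly on reuse instead of silently weakening security.
        if self._used:
            raise RuntimeError("CPA-KEM key reused; generate a fresh key per session")
        self._used = True
        try:
            return decaps(self._secret, ciphertext)
        finally:
            self._secret = None  # drop the secret so it cannot be reused

# Toy stand-ins for a real KEM's keygen/decaps, for illustration only.
def toy_keygen():
    sk = os.urandom(16)
    return b"toy-public-key", sk

def toy_decaps(sk, ct):
    return bytes(a ^ b for a, b in zip(sk, ct))
```

A first decapsulation succeeds; a second one raises, so the “clever optimization” of reusing the key fails fast in testing rather than shipping.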
CCA KEMs can provide forward secrecy; you just generate a fresh key each time (or, alternatively, periodically, e.g. once a minute).
Now, a CPA KEM may be cheaper than a CCA KEM in this scenario; we’d need to consider: a) how much cheaper (both in computation time and bandwidth), b) how much this cost delta matters to implementations, and c) how that compares to the safety we get by sticking with CCA.
The answers to these questions are considerably harder (and will depend on the primitive we’re talking about); however (IMHO) they need to be asked…
I’m not aware of any Round 2 KEM that could conceivably fall into your (1) there (all of the signature schemes do).
And it’s plausibly as easy, if not easier, to screw up category (3), IMO. Rounding is possibly more sensitive to someone doing something stupid, e.g. rounding off one fewer bit than specified (in order to fit it into a constrained device, or for whatever reason).
And in particular, most of the schemes do rounding as well (regardless of whether they also add explicit error) to reduce ciphertext sizes. So even here the distinction is not so clear.
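The sensitivity of rounding to an off-by-one-bit mistake is easy to make concrete with a toy example. The parameters below (q = 2^13, keeping the top d bits of a coefficient) are illustrative only, not taken from any submission; the point is that keeping one bit fewer than specified doubles the worst-case reconstruction error while everything still appears to work:

```python
# Illustrative parameters only: q = 2**13; compression keeps the top d bits
# of a coefficient, as many lattice schemes do for ciphertexts.
Q = 2 ** 13

def compress(x, d):
    # Round x * 2**d / Q to the nearest integer, mod 2**d.
    return ((x * 2 ** d + Q // 2) // Q) % 2 ** d

def decompress(y, d):
    return (y * Q + 2 ** (d - 1)) // 2 ** d % Q

def max_error(d):
    # Worst-case distance (mod Q) between x and decompress(compress(x)).
    worst = 0
    for x in range(Q):
        x2 = decompress(compress(x, d), d)
        worst = max(worst, min((x - x2) % Q, (x2 - x) % Q))
    return worst
```

Keeping d = 9 bits instead of d = 10 doubles the worst-case per-coefficient error; nothing crashes, decryption mostly still works, and only the failure probability quietly degrades.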
I agree with Jacob that simpler distributions do not necessarily
prevent implementation blunders.
Even something as simple as the uniform distribution can be
screwed up, as illustrated by countless ECDSA-related examples
(like this recent one presented at the WAC2 workshop before
CRYPTO: https://ia.cr/2019/023).
I think we need to come up with systematic measures to prevent bad
implementation regardless of the error distribution used.
To some extent, deterministic signatures can be such a measure (as
pointed out in https://ia.cr/2019/023), but I would not go as far
as making them mandatory.
Statistical tests could be another one: there already exist
standardized tests for testing uniform randomness, so it makes
sense to define analogous tests for other distributions (for
example, something like this for Gaussians:
https://ia.cr/2017/438).
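As a sketch of what such a distribution test could look like (stdlib-only Python; the centered binomial distribution is just a common lattice-scheme example, and a real test suite would choose sample sizes and significance levels systematically rather than eyeballing the statistic):

```python
import random
from math import comb

def cbd_sample(eta, rng=random):
    # Centered binomial CBD(eta): difference of two eta-bit Hamming weights.
    return sum(rng.getrandbits(1) for _ in range(eta)) - \
           sum(rng.getrandbits(1) for _ in range(eta))

def chi_squared(samples, eta):
    # Chi-squared goodness-of-fit statistic against CBD(eta).
    # A sample outside [-eta, eta] raises KeyError, itself a loud failure.
    n = len(samples)
    counts = {k: 0 for k in range(-eta, eta + 1)}
    for s in samples:
        counts[s] += 1
    stat = 0.0
    for k in range(-eta, eta + 1):
        expected = n * comb(2 * eta, eta + k) / 2 ** (2 * eta)
        stat += (counts[k] - expected) ** 2 / expected
    return stat
```

For eta = 2 and 10^5 samples, a correct sampler scores near the 4-degrees-of-freedom expectation (about 4), while a sampler that is merely uniform on [-2, 2] scores in the tens of thousands.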
Thomas
Given the code as it actually exists (and thinking through possibilities
for further implementations), one can reasonably say that a _subroutine_
is shared, but this sharing does _not_ imply that the overall chance of
error is the same, or even that it's close. One has to analyze the
chance of error inside the subroutine _and_ the chance of error outside
the subroutine.
I think I’m biased because I often develop for hardware, and we don’t
necessarily have randombytes() available within the simulator or
FPGA. But it's easy to run test cases through, so long as you're
checking input -> output mappings.
Build an implementation using randombytes(). Plug in a randombytes()
implementation that does "read this input". This converts checking KATs
into an example of "checking input -> output mappings". This has nothing
to do with what RNG APIs are "available within the simulator or FPGA";
it's another type of test for the original implementation.
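This technique is easy to demonstrate with a toy: an implementation written against a randombytes() interface, plus a replacement that replays a recorded buffer, turns a KAT into a pure input -> output check. The `toy_keygen` below is a placeholder, not any real scheme:

```python
import hashlib

class ReplayRandom:
    """randombytes() replacement that returns bytes from a fixed buffer."""

    def __init__(self, data):
        self._data = data
        self._pos = 0

    def randombytes(self, n):
        if self._pos + n > len(self._data):
            raise RuntimeError("KAT input exhausted")
        out = self._data[self._pos:self._pos + n]
        self._pos += n
        return out

def toy_keygen(randombytes):
    # Placeholder "key generation": seed -> (public, secret).
    seed = randombytes(32)
    return hashlib.sha3_256(seed).digest(), seed
```

Running `toy_keygen` with `ReplayRandom(kat_seed)` is fully deterministic, so its output can be compared byte-for-byte against a stored known answer, with no RNG available in the simulator or FPGA harness.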
Of course there are ways that KATs can fail. For example, suppose an
implementor tries running a series of tests as follows:
( echo keygen; echo enc; echo dec ) \
| while read operation
do
  ssh masterdevice ./runtest $operation \
  | cmp - $operation.kat
done
Will the implementor notice that this is actually testing only keygen,
and skipping the tests of enc and dec? (By default, ssh forwards its
local stdin to the remote command, so the first ssh invocation silently
consumes the remaining "enc" and "dec" lines.)
In exactly this situation, say there's a bug in seed expansion. With the
usual design choice of uncompressed seeds, the tests will catch these
bugs, because the bugs will affect the keygen output. With your design
choice of compressed seeds, the tests will _fail_ to catch these bugs,
because the bugs won't affect the keygen output.
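A toy model of this seed-compression point (hypothetical `expand` function, not any submission's): when the key serializes the expanded values, a bug in expansion changes the keygen output and a keygen KAT flags it; when the key serializes only the seed, keygen output is bit-identical with or without the bug:

```python
import hashlib

def expand(seed):
    # Stand-in for seed expansion (e.g., deriving a public matrix).
    return hashlib.shake_128(seed).digest(64)

def buggy_expand(seed):
    # Subtle bug: absorbs one byte too few of the seed.
    return hashlib.shake_128(seed[:-1]).digest(64)

def keygen_uncompressed(seed, expander):
    # Key stores the expanded values: the bug changes keygen output,
    # so a keygen KAT catches it.
    return seed + expander(seed)

def keygen_compressed(seed, expander):
    # Key stores only the seed; expansion is redone on use, so keygen
    # output is identical with or without the bug.
    _ = expander(seed)  # expansion runs, but is never serialized
    return seed
```

With the compressed form, the buggy and correct implementations produce identical keygen output, so the bug can only surface in the enc/dec tests, exactly the ones the broken loop above skipped.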
I also don't think it's safe to assume that one side of the enc-dec is
always a preexisting implementation.
This is turning into another debate where we agree on the basic points,
and that in some axis A > B > C, but you write volumes of text on why
B is very far from A
No. Beyond the concrete data points, you've made a pile of claims
regarding whether various bugs
* have a "slight" probability of appearing (how much is "slight"?),
* "mostly" can't happen (how often is "mostly"?),
* are caught by "minimal testing" (how much is "minimal"?),
* are caught by "reasonable tests" (which tests are "reasonable"?),
* are caught by "comprehensive tests" (how much is "comprehensive"?),
* come from "complex" subroutines (how do we judge "complex"?),
* are "hard to control" in specs (how do we judge "hard"?),
etc. The pervasive lack of clarity makes it unnecessarily difficult to
figure out what exactly you're claiming and what your evidence is, and
makes any underlying errors unnecessarily difficult to correct.
For example, in reference to "copying randomness where they should be
using new randomness", you claimed that this would be "unlikely to pass
reasonable tests". I asked whether the round-1 Dilithium tests were
"reasonable". You didn't answer. I can't figure out whether
* you think the round-1 Dilithium tests weren't "reasonable", or
* you think the tests were "reasonable" but the round-1 Dilithium
software security disaster was one of the supposedly "unlikely"
cases not caught by such tests. (How often is "unlikely"?)
Meanwhile you seem convinced that matrices of polynomials aren't any
more error-prone than single polynomials.
This isn't an "agree to disagree" situation. It's a "one side asking
for clarification and the other side refusing to clarify" situation.
Readers might also observe that there aren't many candidates using
matrices of polynomials; that you have one of these, modulo polynomials
vs. bigints; that your main reason for jumping into this discussion
seemed to be to deny that the added matrix layer increases the chance of
bugs; and that, when your position was challenged, you evaded questions
and resorted to issuing accusations of bias, dragging this thread into
the mud, which might discourage discussion here but in the end isn't
going to make security holes _less_ likely to happen.
Furthermore, as usual you are replying to each one of my points with
two or three of your own
Have you considered the possibility that the shorter analysis is
oversimplified?
As an analogy, recall that at one point you painted a simple picture of
bandwidth efficiency and tried to dismiss smooth parameter spaces as
"fine-tuning".
Analyzing the actual quantitative information took much
more work (https://cr.yp.to/papers.html#paretoviz), showing among other
things that your picture was oversimplified and that smooth parameter
spaces have a _larger_ effect than various issues that you highlighted.
I would prefer to see you skip the ad-hominem attacks and respond to the
technical points. If you don't have time for a detailed analysis, simply
say so, rather than trying to make it sound as if people carrying out
more detailed analyses are doing something wrong.
This is ridiculous and you know it. Also, you chose a particular
reading of Peter's claim that made it false. Consider that in the
specification of ThreeBears, and also in each of my implementations
of it, the short key elements and noise are sampled by calling the
function “noise”.

So, instead of asking whether the short element and the noise are
sampled by "the same routine" as he wrote, you're asking merely whether
these two routines have _some_ overlap of code? This is content-free:
e.g., the SABER code that creates noise by rounding uses some of the
same CPU instructions as the code that creates short elements.
Peter wrote that "Most (?) LWE schemes use the same routine for sampling
this value and for sampling the noise." You're the one claiming that
this criterion means something different from
{short elements} = {key-noise vectors} = {ciphertext-noise vectors}
but then what exactly do you think it _does_ mean? Again the lack of
clarity makes analysis unnecessarily difficult.
If these issues "happen" to look good for a design that I'm involved in,
perhaps that's because I've put quite a lot of thought into these issues
over the past twenty years, as documented in various talks and papers
starting long before NISTPQC. Perhaps this allocation of time, in turn,
comes from my seeing these issues as critical for the end users and
worthy of careful analysis.
Each loop here is another chance for Dilithium-type security disasters.