Hi Dan,
I’m sorry that I don’t have time to address every point of your essay.
Below I have picked out a few points that seem most relevant to the
discussion of domain-separated hashes in the CCA2 conversion.
> On Dec 7, 2018, at 8:17 PM, D. J. Bernstein <d...@cr.yp.to> wrote:
>
> Focusing here on comparing the following two responses to multi-target
> attacks:
>
> * Response #1: include a key prefix in the hash used in the CCA2
> conversion, a long enough prefix that prefixes of user keys don't
> collide.
>
> * Response #2: "aim for a very high single-target security level, and
> then rely on the fact that T-target attacks gain at most a factor
> T".
> Mike Hamburg writes, regarding Response #2:
>> overdesigning this way is unappealing
> Sure, 2x sounds significant, but that's for an archaic 2^64 pre-quantum
> security level. The gap rapidly shrinks as the target security level
> increases. Do you really claim that 1.25x is "overdesigning”?
> Response #2, unlike Response #1, is a complete solution to the problem:
> assume that there will be multi-target attacks, and choose the security
> level to protect against the maximum possible effect of those attacks.
> This eliminates the hard problem of evaluating multi-target security.
> The security evaluator can safely focus on single-target attacks.
I would further divide Response #2 into two sub-responses.
Response #2a: Nobody actually needs Class V security as NIST
recommends it (2^256 classical work even to find one target of many).
The gap between 2^256 and any realistic attack budget is enough to
cover any likely multi-target speedup.
This is how I would categorize NTRU Prime’s multi-target defense.
You describe this as if “aim[ing] for a very high single-target security
level” is mutually exclusive with Response #1, or somehow obviates it. But this
isn’t true. The security margin between 2^256 and reality is there to
counter many possible threats, and multi-target attacks are only one
of them. You also have to consider improvements in lattice reduction,
lack of tightness in the FO transform (almost universal in the QROM),
lack of tightness in higher-level protocols using the KEM, and other
threats. As a result, almost every submission “aim[s] for a very high
single-target security level”.
Response #2b: If we want 2^256 multi-target security, then aim for
2^320 single-target. I didn’t notice any schemes doing this — some
made higher security claims or estimates, but I didn’t see anyone
claiming this as their multi-target defense.
And yes, I would call this “overdesigning”. Overdesigning can be a
virtue, and I see that as the purpose of Class V security in the first place:
it’s a hedge against unknown attacks, expected or unexpected.
But adding 64 extra bits of security against lattice attacks, specifically to
counter an attack that probably doesn’t apply in full, seems unappealing
to me, because it comes at a significant performance cost.
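To put numbers on this, using only the generic T-target bound quoted above
(the helper name below is mine, purely illustrative):

    # Response #2b arithmetic: to guarantee b bits of multi-target security
    # against T = 2^t targets using only the generic bound, aim for b + t
    # bits of single-target security.
    def single_target_needed(multi_target_bits, log2_targets):
        return multi_target_bits + log2_targets

    print(single_target_needed(64, 64))    # 128: the "2x" case
    print(single_target_needed(256, 64))   # 320: the "1.25x" case, i.e. 64 extra bits

The exponent ratio shrinks as the security level grows, but the absolute
64-bit gap does not.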
> [ regarding Response #1: ]
>> Each feature like this adds complexity and may hurt performance, while
>> possibly improving security.
>
> There's a metric missing from this list, namely the complexity of a
> comprehensive _review_ of security. One way to see the importance of
> this metric is as follows.
Response #1 is domain-separating the hash function. This is a
long-standing security technique and has formal, informal and
practical benefits:
* Formally, it is often required in security proofs in other contexts, and
it has a clear benefit in security proofs for KEMs.
* Informally, it means that if an attacker is computing lots of hashes, he
has chosen one single target per hash. This reduces the set of attacks
that must be considered.
* Practically, it is part of several systems’ defense against CCA attacks,
and the lack of this measure contributed to borderline-feasible CCA
attacks on other systems, all in this very standardization process.
I am aware that NTRU Prime isn’t one of those systems.
Furthermore, domain separation is really simple. It costs no bandwidth,
little to no extra memory, very few extra cycles (tens to thousands
depending on whether it pushes you into a new hash block), very little
code, no extra DPA concern, and little complexity in the spec. And it
clearly doesn’t harm security.
As a result, I simply do not agree that domain separation is a mere
“complication” — on the contrary, I think it helps to focus analysis.
I recognize that full, tight, end-to-end proofs of the effectiveness of
domain separation for KEMs (or all systems that contain KEMs? Is
that the requirement??) are not currently known, and may never be
known, but I expect that domain separation will figure in proofs of
security for components of the system (or perhaps the entire KEM).
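To make Response #1 concrete, here is a minimal sketch of the key-prefix
domain separation I have in mind (Python with hypothetical names; schematic,
not any particular submission’s actual transform):

    import hashlib

    KEY_PREFIX_BYTES = 32  # long enough that honest users' prefixes don't collide

    def session_key_separated(pk, m, ct):
        # Bind the hash to this recipient by prepending a prefix of the
        # public key to the usual inputs of the CCA2 conversion's hash.
        return hashlib.sha3_256(pk[:KEY_PREFIX_BYTES] + m + ct).digest()

    def session_key_plain(m, ct):
        # Without the prefix, the same hash computation is meaningful for
        # every user's key, which is what a multi-target attacker exploits.
        return hashlib.sha3_256(m + ct).digest()

The entire difference is the prefix, which is where the cost estimate above
(a few extra bytes hashed, at most one extra block) comes from.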
> I understand how Response #1 can feel like progress. I've even advocated
> an analogous complication---but that was in a situation where
>
> * it was easy to see how the complication eliminated multi-target
> attacks against a complete end-user cryptographic system;
> * this turned into a simple, easily reviewable, proof of multi-target
> security for the complete system; and
> * the target security level was only 2^128 pre-quantum.
Are you referring to Ed25519 here? You advocated domain-separating
the hash in 2011 by saying:
“…; the use of [the public key] A [in the hash] is an inexpensive way to
alleviate concerns that several public keys could be attacked
simultaneously;”.
This sounds an awful lot like my informal justification for hashing the
seed above, rather than foresight that it would “[turn] into a simple,
easily reviewable, proof of multi-target security for the complete
system”.
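(For reference, the mechanism there is that Ed25519’s challenge hash takes
the public key A as input: a signature on M is (R, S) with
S = (r + SHA-512(R || A || M) * a) mod L, where a is the secret scalar.
Hashing A ties each signature to one key, just as a key prefix ties the KEM
hash to one recipient.)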
> This KEM hash is a very different story. It is, at best, addressing a
> much smaller part of a system that has many other multi-target
> weaknesses. I doubt that anyone has ever constructed a similar system
> with a proof of multi-target security, even if a proof is possible for
> this small piece. Furthermore, we're talking about post-quantum KEMs
> aiming for a much higher security level, so the performance impact of
> Response #2 is relatively small.
I don’t understand this argument either. The performance impact of
Response #2 is a smaller ratio than at lower security levels, but surely
it is larger in absolute terms?
>> NTRU Prime’s evaluation technique is already less conservative than
>> other systems’ — they would call its parameter set class II or III
>> instead of V. Just saying that you can divide that by T=2^64 and
>> still be OK is unconvincing
>
> Let me see if I understand, quantitatively, what you're claiming here.
> You're starting from 2^155 pre-quantum security for Streamlined NTRU
> Prime 4591^761, dividing by 2^64 for a 2^64-target attack, and saying
> that 2^91 is not OK? Or you're doing something similar starting from
> 2^140 post-quantum security?
Quantitatively, the requested specification of Class V is 2^256 effort to
break one key or ciphertext out of many.
Starting from your estimate of 2^256 single-target security, which is based
on your best estimates of the real performance of attacks known today,
you cannot simply subtract 64 bits and claim that you have met the
specification. You can instead argue that the specification is silly, as you
have done, but that doesn’t help an auditor who wishes to determine
whether you’ve met it.
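Concretely, the only arithmetic available to that auditor is the same generic
bound, and it falls short of the specification (a two-line check, numbers as
above):

    # What a 2^256 single-target estimate guarantees against 2^64 targets
    # via the generic bound alone:
    print(256 - 64)   # 192, below the 2^256 the Class V specification asks for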
Furthermore, let’s consider that for realistic systems, Class III security
(multi-target, as specified) is certainly enough. As I understand it, this
is really what you’re arguing here anyway when you say that Class V
multi-target is silly. If an auditor wishes to determine that you’ve met Class
III, then he or she will need to either:
* Evaluate the possibility of multi-target attacks -- the evaluation will likely
be easier if the hash is domain-separated; or
* (Response #2) Check that your claimed Class V instance actually
achieves its claimed 2^256 single-target security, and therefore reaches
Class III multi-target.
Response #2 is easier for an auditor to accept if your system includes
some security margin, instead of being designed to hit exactly 256-bit
security against currently known attacks. It is therefore a better fit for a
system that claims a margin above its target than for one designed
without any margin.
NTRU Prime is different from other proposals in that it includes only
one parameter set (per problem type), with little margin between the
best estimate for the security of that parameter set and Class V. I’m
not arguing that this is a bad decision, only that it doesn’t make the
multi-target evaluation easier. It also seems inconsistent with your
strategy to “accurately evaluate the costs of the attacks” and then
“choose the highest-security parameters that users can afford”, since
different users can afford different parameters.
[…]
> Of course underestimating security levels has the general effect of
> increasing security parameters. This could protect users against future
> attack speedups. You call this "conservative", suggesting that it's a
> good approach. I have two questions and a comment:
I am not advocating for underestimating security, and I hope that my
argument above clarifies this. That is a completely separate topic of
conversation.
> * How do you think this approach is different from "overdesigning",
> which a moment ago you called "unappealing"? What's the principle
> for deciding what's "overdesigning" and what's "conservative”?
It is overdesigning. Overdesigning by an additional 64 bits, just to avoid
including any multi-target defenses, is what’s unappealing.
Cheers,
— Mike