Thanks to Mike for his comments. Here are a few follow-up notes.
1. I agree that, for cryptanalysts analyzing the impact of decryption
failures, the critical question is how to make a decryption failure
happen. (If you can't make a decryption failure happen then you can't
distinguish the PKE from a modified PKE that magically always works.)
Of course the attacker isn't given the secret key.
Some of the KEMs have proofs that eliminate the need for cryptanalysts
to analyze this question. However, care is required in these proofs. As
far as I know, every correct proof of this type can be factored through
the HHK17 definition of failure probability, which has the subtlety that
the attacker _can_ look at the secret key.
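Concretely, HHK17 calls a PKE delta-correct if

   E_{(pk,sk)} [ max_m Pr[Dec(sk, Enc(pk, m)) != m] ] <= delta,

where the expectation is over key generation and the probability is
over the encryption coins. The max over m is where the secret key
enters: the worst-case message is chosen with the whole keypair,
including sk, in view.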
There are ways to work with the HHK17 definition---some KEMs have no
decryption failures; some KEMs use the attacker-chosen message in a way
that doesn't affect the failure probability; maybe the probability can
be analyzed even in other cases---but we can't simply assume that people
are getting this right. As a code-based example, the LEDAcrypt
specification
   https://www.ledacrypt.org/documents/LEDAcrypt_spec_latest.pdf
appeals to the HHK17 CCA theorems but actually uses a different notion
of failure probability, without commenting on the difference.
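In general, a failure probability that averages over messages, rather
than maximizing over attacker-chosen messages with the keypair in view,
can be far below the HHK17 quantity, since the max lets the attacker
target messages that a particular secret key decrypts incorrectly.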
It's not clear to me that this proof gap can be fixed.
2. My paper focuses on specific KEMs that are proposed for widespread
deployment and that claim specific quantitative security levels. As far
as I can tell, none of the target KEMs claim a better T-target security
level than what's implied by the trivial T-factor bound.
In other words, if the single-target security goal is chance <=eps (at
whatever cost), then the T-target security goal is chance <=T*eps (at
the same cost modulo overhead for key generation etc.). There is no
additional risk of T-target attacks beating the T-target security goal,
beyond the risks of single-target attacks beating the single-target
security goal, which are the risks covered in my paper.
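The T-factor bound is just a union bound. As a worked example: if each
single key can be attacked with chance at most eps = 2^-128 at some
cost, then the chance of an attack at the same cost breaking at least
one of T = 2^64 keys is at most T*eps = 2^-64.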
Hypothetically, if some KEM were to go out on an extremely shaky limb
and claim a better T-target security level (e.g., chance<=sqrt(T)*eps),
then this KEM would have an extra multi-target risk to analyze, and it
would be useful to make a table showing how proofs address various
components of this risk.
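As arithmetic: with T = 2^64 targets, chance<=sqrt(T)*eps would be
claiming a 2^32-times-stronger bound than the trivial chance<=T*eps,
since sqrt(2^64) = 2^32; that entire 2^32 gap would need analysis.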
3. Regarding OW-Passive hardness assumptions, see below.
Mike Hamburg writes:
> I would say that OW-Passive on a dPKE isn’t necessarily a weaker
> assumption than IND-CPA or especially OW-Conf on an rPKE.
After reviewing the relationship between OW-Passive for a PKE and
IND-CPA for the same PKE, my paper says:
   This does not imply that assuming OW-Passive for one PKE is safer
   than assuming IND-CPA for a _different_ PKE: the differences between
   the PKEs can outweigh the gap between OW-Passive and IND-CPA.
What Mike is saying sounds like a special case of this, where the first
PKE is deterministic and the second is randomized. He's also considering
a variant of the same statement with OW-Conf in place of IND-CPA.
> The intuitive reason that randomized schemes typically incur a greater
> tightness loss when reducing from OW-Passive is, roughly, that OW-Passive is a
> harder problem for a randomized scheme than for a deterministic one.
I agree that known proof strategies have more trouble using OW-Passive
assumptions for randomized PKEs than using OW-Passive assumptions for
deterministic PKEs. (Otherwise every KEM would have "dist: safe".)
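To illustrate where the extra leverage comes from: for a dPKE, anyone
with the public key can test a guessed message against a OW-Passive
challenge ciphertext by re-encrypting. A minimal sketch, assuming a
hypothetical enc(pk, m) interface:

   # Hypothetical deterministic interface: enc(pk, m) returns the
   # unique ciphertext for m, so guesses are publicly checkable.
   def matches_challenge(enc, pk, c, m_guess):
       return enc(pk, m_guess) == c

   # For an rPKE, enc(pk, m, coins) also depends on coins the attacker
   # doesn't know, so there is no analogous public plaintext check.
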
But I don't agree that OW-Passive assumptions for randomized PKEs are
generally harder to break than OW-Passive assumptions for deterministic
PKEs. This can go either way, depending on the PKEs; it's critical for
cryptanalysts to look at the details of both PKEs. The literature is
full of examples of broken PKEs of each type.
Of course an already-proven-to-be-broken hardness assumption is,
logically, a powerful tool in proofs. (False implies everything.) So we
have many examples of powerful hardness assumptions that are easy to
break---counterexamples to the idea that the power of an assumption
inside proofs implies the strength of the assumption. But we also have many
apparent counterexamples to the opposite extreme, the idea that power
implies weakness.
Mike points specifically to FO derandomization having security limited
by the message length. My reaction to "input length limits security" is
"make sure the inputs are long enough"; this doesn't justify any sort of
deterministic-vs.-randomized conclusion.
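For readers who haven't seen it: FO derandomization replaces the
encryption coins with a hash of the message. A minimal sketch in
Python, assuming a hypothetical enc(pk, m, coins) interface; the names
are illustrative, not taken from any particular KEM:

   import hashlib

   def derandomized_enc(enc, pk, m):
       # Derive the coins from the message, making encryption of m
       # repeatable and hence publicly checkable by re-encryption.
       coins = hashlib.shake_256(m).digest(32)
       return enc(pk, m, coins)

   # Anyone who guesses m can recompute the coins, so the security of
   # the result is limited by how hard m is to guess, i.e., by the
   # message length.
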
To be clear: I'm not saying that FO derandomization is safe. I find it
worrisome that there don't seem to be any papers from cryptanalysts
studying the security of FO derandomization. The proof picture allows
the output of FO derandomization to be much weaker than the original
PKE, even if one restricts attention to ROM attacks: the HHK17 bounds
lose a factor of Q, the number of random-oracle queries, and I'm not
aware of anyone suggesting that better proofs are possible. The main
message from an attack here would be "make sure
that proof gaps are seen and addressed by cryptanalysts".
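To quantify the Q-factor loss mentioned above: if the original PKE is
OW-Passive with chance at most eps, the HHK17-style bound for the
derandomized PKE is roughly Q*eps for an attacker making Q hash
queries; e.g., eps = 2^-128 and Q = 2^64 guarantee only 2^-64.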
---Dan