Hi Dan,
If your concern is really entirely about complexity, then I can see why
my comments might not have mitigated it. However, I also have a
concern about the gap between what’s studied and what the proofs
assume, which I had thought was closely related to yours. My concern
involves the strength of the assumption as well as its complexity. I will
explain why this reduction partially mitigates my concern.
By the way, this thread is somewhat retreading ground we already
covered in 2019: see, e.g., my July 30 2019 message, which used the
notation “OW-Conf-semiclassical” instead of “OW-CPA-scChk”. Still,
perhaps it is interesting to go over it again.
I agree that OW-CPA-scChk is more complicated than OW-CPA, or
even IND-CPA. However, it is a weaker assumption than IND-CPA, it
is (in my opinion) qualitatively more similar to OW-CPA, and it is more
likely to allow amplification from low-probability attacks to slower but
high-probability attacks. (Though this isn’t guaranteed, because
Kyber isn’t randomly self-reducible.) It is also reducible to vanilla
OW-CPA, though at the cost of an additional tightness loss. Since my
concerns were about the strength of the assumptions as well as their
complexity, the fact that OW-CPA-scChk is weaker is relevant to me.
In my opinion, moving from IND-CPA to OW-CPA-scChk reduces the
risk from a subtle low-probability distinguisher appearing, analogous to
the “attacking AES with MD5” example which (in a completely different
context) you and Tanja explored in your “Non-uniform cracks in the
concrete” paper. As you have pointed out, and as Micciancio-Walter
“On the bit security of cryptographic primitives” also points out, an
epsilon-advantage distinguisher really only provides something like an
epsilon^2 advantage, in terms of how difficult it is to amplify or
otherwise use.
This, in addition to the free precomputation, enables the theoretical
low-probability distinguishing attack on AES using MD5 even if no
other attacks on AES are feasible. I haven’t gone through the proof,
but I suspect that an epsilon-probability attack on OW-CPA-scChk
gives a real O(epsilon) advantage in the M-W sense, in contrast to the
IND case. That is, I suspect that the strength of the assumption
scales linearly in epsilon instead of quadratically.
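To make the linear-versus-quadratic point concrete, here is a toy Python sketch (my own back-of-the-envelope illustration, not taken from either paper; the constants are just Chernoff-style ballpark figures):

```python
import math

def search_repetitions(eps, target=0.99):
    # A search attack (e.g., against OW-CPA) succeeds with probability
    # eps per run, and the attacker recognizes success, so independent
    # runs amplify directly: failure probability after n runs is (1-eps)^n.
    return math.ceil(math.log(1 - target) / math.log(1 - eps))

def distinguisher_samples(eps, target=0.99):
    # A distinguisher with advantage eps must be amplified by majority
    # vote; a Chernoff-style bound needs on the order of 1/eps^2 samples
    # to reach constant advantage.
    return math.ceil(2 * math.log(1 / (1 - target)) / eps**2)

eps = 2**-20
print(search_repetitions(eps))     # on the order of 1/eps,   roughly 2^22
print(distinguisher_samples(eps))  # on the order of 1/eps^2, roughly 2^43
```

So an epsilon-advantage distinguisher costs on the order of 1/epsilon^2 to use, while an epsilon-probability search attack costs on the order of 1/epsilon, which is the sense in which I mean “linearly instead of quadratically.”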
Furthermore, while I am far from an expert in lattice attacks, I am under
the impression that in some cases distinguishers are easier than OW
attacks, and are even (in the case of the dual attack) used to construct
OW attacks.
Both of these make me more concerned about a loose reduction
to IND-CPA than about one with the same looseness factor to
OW-CPA-scChk. In other words, the OW-CPA-scChk reduction
mitigates my concern somewhat.
Regards,
— Mike
PS: the following may or may not mitigate any concerns, but I would
like to make it a bit clearer why there is a tightness loss in the
BHHHP proof as applied to Kyber, beyond the general difficulty of
tight PQ proofs. It could, of course, be because Kyber is weak. But
there is also a reason to expect such a tightness loss generically.
An OW-CPA attacker against a dPKE knows when it has won the
game. If it narrows the attack down to, say, 2^40 guesses, it can just
try encrypting each one of them, and then win when it finds the right
one. On a quantum computer, a Grover speedup may be applicable
in this step.
The same is not true for an rPKE. Because the encryption is
randomized, even if the attacker can narrow the message down to
a few guesses, it doesn’t immediately win with probability 1. That’s
why IND-CPA is plausible at all for an rPKE. If the attacker has
2^40 equally likely guesses and can’t narrow them down further,
then it wins the OW-CPA game against the rPKE with probability 2^-40.
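A minimal toy sketch of this asymmetry (my own illustration; the “encryption” here is just a keyed hash, not a real PKE, and the key and message names are made up):

```python
import hashlib
import secrets

PK = b"toy-public-key"

def d_encrypt(pk, msg):
    # Deterministic toy "encryption": same (pk, msg) -> same ciphertext,
    # so anyone can recompute and compare.
    return hashlib.sha256(pk + msg).digest()

def r_encrypt(pk, msg):
    # Randomized toy "encryption": fresh coins each call, so an attacker
    # cannot recompute the target ciphertext from a candidate message.
    coins = secrets.token_bytes(16)
    return hashlib.sha256(pk + coins + msg).digest()

secret = b"message-7"
candidates = [b"message-%d" % i for i in range(16)]

# dPKE: the attacker re-encrypts each guess and *knows* when it has won.
target_d = d_encrypt(PK, secret)
found = [m for m in candidates if d_encrypt(PK, m) == target_d]
assert found == [secret]

# rPKE: re-encrypting a guess (with fresh coins) essentially never matches
# the target, so the attacker can only output one guess and hope, winning
# with probability 1/len(candidates).
target_r = r_encrypt(PK, secret)
assert all(r_encrypt(PK, m) != target_r for m in candidates)
```

The dPKE loop is exactly the “try encrypting each one and win when it matches” step above, and the Grover speedup would apply to searching that loop.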
As such, there is a natural obstacle to generically converting an
OW-CPA attack on a dPKE into one on an rPKE. This obstacle forces
some combination of tightness loss and correctness-checking oracles.
You can see the result either with O(d) tightness loss and a
semiclassical checking oracle, or with O(qd) tightness loss and no
checking oracle.
While of course one would rather have a tight proof, it may still be
useful in directing future proofs and/or cryptanalysis to see where
the tightness loss comes from.