Existing analysis suggests that it's hopeless to write down a complete
proof applicable to dimensions appearing in NISTPQC (<=1344). I noted in
https://twitter.com/hashbreaker/status/1142855190174928897 a claim made
years ago to a nap.edu committee that dimensions of "a few thousand"
would be enough; as far as I know, this claim remains unproven.
Christopher J Peikert writes:
> Then I guess you don't know about this October 2018 Masters thesis, which
> analyzes parameters based entirely on known worst-case reductions, and
> concludes that dimensions n = 1460, 1870 can suffice for 128-bit security
> against "best known" and "paranoid" quantum algorithms, respectively.
This thread is explicitly about the security of NISTPQC submissions...
If, on the other hand, the paragraph above includes an unstated move of
the goalposts, switching to 2^128 security for a _different_ problem
(not IND-CCA2, for example, or not Frodo), please clearly acknowledge
the change of subject.
On Thu, Jun 17, 2021 at 4:21 PM Christopher J Peikert
<cpei...@alum.mit.edu> wrote:
>
>
> I am the one who put forward that hypothesis to the NAP committee, in July 2017. The 2018 Masters thesis claims to establish it, with lots of room to spare: the dimensions are less than 1900. If that analysis is anywhere near correct, then it is not at all "hopeless to write down a complete proof applicable to dimensions appearing in NISTPQC (<=1344)."
>
> My hypothesis referred to the hardness of the LWE problem that can be concluded solely from worst-case hardness theorems, and not to any NISTPQC submissions or parameters. Indeed, it couldn't possibly do so, since the talk preceded the Round 1 deadline (Nov 2017).
But why didn't, e.g., the Frodo submission cite the thesis, compute the
needed parameters from the worst-case hardness theorem, and propose
those?
What the Frodo submission actually does is cite, in Section 5, theorems
reducing the BDDwDGS problem to LWE, and maybe I'm being
too dense right now, but I don't understand why that's helpful in
evaluating the difficulty of BDDwDGS. It seems to me that one would
like to argue that LWE hardness implies BDDwDGS hardness.
If a submission actually did that work, it would improve the security
picture for that submission, and likely for others. Without that work,
the theorem doesn't say anything relevant to the concrete security
parameters, just as the existing CDH-to-DLOG reductions in groups
aren't that interesting.
1. There are theorems _loosely_ saying LWR >= LWE >= SIVP. (Replacing
SIVP by, e.g., BDDwDGS is essentially orthogonal to this message.)...
Furthermore, as the MQDSS security disaster
illustrates, sometimes attackers can exploit exactly the looseness that
allows a proof to be written down.
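To make the compounding concern concrete, here is a toy calculation; the per-step loss factors are made up purely for illustration and are not taken from any actual LWR/LWE/SIVP theorem:

    # Toy illustration of how time looseness compounds across a chain of
    # reductions such as LWR >= LWE >= SIVP.  The loss factors below are
    # hypothetical placeholders, not values from any actual theorem.
    import math

    claimed_bits = 128            # security level we would like to claim for LWR
    n = 640                       # hypothetical dimension

    loss_lwr_to_lwe = n**3        # placeholder polynomial time loss for step 1
    loss_lwe_to_sivp = n**4       # placeholder polynomial time loss for step 2

    total_loss_bits = math.log2(loss_lwr_to_lwe * loss_lwe_to_sivp)

    # An attacker breaking LWR in time 2^claimed_bits only yields an SIVP
    # algorithm with time about 2^(claimed_bits + total_loss_bits), so the
    # worst-case problem must be assumed that much harder before the chain
    # rules the attacker out.
    print(f"compounded time loss: ~2^{total_loss_bits:.1f}")
    print(f"SIVP hardness needed: ~2^{claimed_bits + total_loss_bits:.1f}")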
Thanks, everyone, for the insights. For concreteness, let's focus on Frodo-640 and its target of 128-bit security against classical attacks.
From the proof's perspective, Frodo-640 aims to use SIVP => LWE => IND-CPA. However, the SIVP => LWE proof requires parameters that are *not* applicable to Frodo-640, so Frodo-640 has to rely on a *far less cryptanalyzed* problem, BDDwDGS (compared to SIVP), via BDDwDGS => LWE. Is that a fair summary? If it is, then it's not very reassuring that Frodo-640's LWE with its concrete Gaussian error parameters is safe, even if we ignore all the running-time looseness.
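As a rough sanity check on the "parameters not applicable" point (numbers from memory, so please treat them as approximate): Regev-style worst-case statements are usually given for Gaussian parameter alpha*q >= 2*sqrt(n), which is much wider than the error Frodo-640 actually samples:

    # Rough sanity check (numbers from memory; treat as approximate) comparing
    # the error width that Regev-style worst-case reductions ask for with the
    # error width Frodo-640 actually samples.
    import math

    n = 640                        # Frodo-640 LWE dimension
    sigma_frodo = 2.8              # Frodo-640 error standard deviation (approx.)

    # Regev-style statements typically require Gaussian parameter s = alpha*q
    # with s >= 2*sqrt(n); the corresponding standard deviation is s/sqrt(2*pi).
    s_required = 2 * math.sqrt(n)
    sigma_required = s_required / math.sqrt(2 * math.pi)

    print(f"sigma needed for the worst-case theorem: ~{sigma_required:.1f}")
    print(f"sigma used by Frodo-640:                 ~{sigma_frodo}")
    print(f"gap: ~{sigma_required / sigma_frodo:.1f}x too narrow")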
From the perspective of cryptanalytic attacks, Frodo-640 uses SIVP => LWE. This has nothing to do with the hardness of BDDwDGS or with the reduction BDDwDGS => LWE.
I'm confused by the above methodology, and I can't even convince myself that it is logical.
Regards,
- Quan
Dan has raised a credible argument that loose (or inapplicable) proofs
may have degraded submissions.
To the extent that tighter proofs may
exist, it is not necessarily the case that everyone knows about them --
e.g., even in Peikert's case, he states that he was unaware of the
particular work discussed more recently until near the round-3
deadline.
Would Frodo -- or some other candidate -- have had a
different design if the new techniques had been known previously?
To some extent I think Dan's general argument is hard to argue with;
at the very least, the process needs to be mindful of uselessly loose
proofs and of proposals whose parameters don't compensate for that
looseness.
That is, unless you reject
Koblitz and Menezes' well-defended position that inappropriately
applied provable security has been shown to actively undermine
cryptosystem security in some instances.
The full-strength version -- that submissions should be *penalized* for
using a weak argument (maybe even if stronger arguments were discovered
later) -- is perhaps a bridge too far, but the science of high-integrity
processes for selecting cryptosystems is far from established, and
Dan's points are precisely the sort that should be considered to help
advance that field.
I appreciate the clarity of the new claim that "advantage-tight (but
poly-time-loose) reductions" are not "demonstrably risky".
To disprove the claim, I'll specify problems …
Problem X: Compute discrete logs in a group G(lambda) with generator
g(lambda), where the discrete logs are chosen uniformly at random mod
#G(lambda), and lambda is the security parameter.
Problem Y: The same, except that the discrete logs are chosen uniformly
at random between 0 and E(lambda)-1, where E(lambda) <= #G(lambda) and
the ratio #G(lambda)/E(lambda) is Theta(lambda^10).
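For a sense of the gap in best known attack cost between X and Y, here is a back-of-the-envelope estimate using the standard square-root heuristics for Pollard's rho over the full group and Pollard's kangaroo over an interval:

    # Back-of-the-envelope comparison of best known attack costs for Problem X
    # versus Problem Y, using the standard square-root heuristics: Pollard rho
    # costs roughly sqrt(#G) group operations over the full group, and Pollard
    # kangaroo roughly sqrt(E) over an interval of length E.
    import math

    lam = 128                     # example security parameter lambda
    log2_G = 256                  # example group size: #G ~ 2^256
    log2_E = log2_G - 10 * math.log2(lam)   # #G/E = Theta(lambda^10); constant taken as 1

    cost_X_bits = log2_G / 2      # ~sqrt(#G), Pollard rho
    cost_Y_bits = log2_E / 2      # ~sqrt(E),  Pollard kangaroo

    print(f"Problem X attack cost: ~2^{cost_X_bits:.0f} group operations")
    print(f"Problem Y attack cost: ~2^{cost_Y_bits:.0f} group operations")
    print(f"gap: ~2^{cost_X_bits - cost_Y_bits:.0f}, i.e. about lambda^5")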
There was a clear claim that "advantage-tight (but poly-time-loose)
reductions" are not "demonstrably risky". This claim had no ill-defined
escape hatches. I wrote a detailed disproof of the claim. Where's the
retraction?
So you agree, then, that the theorems don't actually inform parameter
setting that directly, because they don't describe the worst-case
attacker's running time?
We're interested in the actual security of a scheme, that is, the best
possible attack. Now, this is inherently unknowable. So what are we to
do? It seems to me that there are roughly four approaches:
1: We can use well-studied schemes that people have put work into
analyzing as the basis of our schemes.
2: We can argue from longevity.
3: We can reduce the actual scheme to a well-specified problem that
is considered hard.
4: We can argue that some analogous problems are hard, and that we've
avoided the problems from before.
The best-case example of setting parameters while ignoring reduction
loss would be Schnorr signatures. We prove security in the ROM, and the
forking lemma costs us a square root in our soundness. But then we say
that no one has come up with an attack that can exploit this in however
many years, and no one has any idea how to use it, and so we're
justified in using the hardness of the DLP to set parameters.
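To see how big that square-root loss is if taken literally, here is a toy calculation with hypothetical numbers (not tied to any concrete Schnorr parameter set):

    # Toy illustration of the forking-lemma loss for Schnorr, taken literally.
    # Rough bound with constants dropped: a forger with advantage eps making
    # q_H random-oracle queries yields a DLP solver with advantage ~eps^2/q_H
    # and comparable running time.
    import math

    eps_forger = 2.0**-40         # forger advantage we want to rule out (hypothetical)
    q_H = 2.0**64                 # hypothetical number of hash queries
    t_bits = 80                   # hypothetical forger running time 2^80

    eps_dlp = eps_forger**2 / q_H               # advantage extracted against DLP

    # Generic-group heuristic: time-t attacks reach DLP advantage about t^2/#G,
    # so to make eps_dlp unreachable at time 2^t_bits we need #G >~ t^2/eps_dlp.
    group_bits_needed = 2 * t_bits - math.log2(eps_dlp)

    print(f"DLP advantage to rule out: 2^{math.log2(eps_dlp):.0f}")
    print(f"group order the proof needs: ~2^{group_bits_needed:.0f}")
    # versus the ~2^256 group that the usual "128-bit DLP" rule of thumb gives.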
By contrast, the LWE/SIVP reduction seems a bit fishy. We invoke it,
but then use arguments about the runtime of known LWE attacks to set
parameters. So what's the point of using the reduction, given that, if
the constants are sufficiently bad, it doesn't rule out a whole bunch
of attacks we are worried about?
And given that we're not really using
the reduction to set the parameters, why is the looseness of LWR to
LWE more worrisome than that of LWE to SIVP?
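To put the "what's the point" question in numbers, a toy calculation -- the overhead exponent below is a placeholder, not the actual value from any reduction:

    # Toy calculation: if the SIVP -> LWE reduction carries a polynomial runtime
    # overhead of roughly n^c (the exponent c here is a placeholder, not the
    # actual value), the guarantee it gives for concrete n can land far below
    # the level suggested by direct LWE cryptanalysis.
    import math

    n = 640                        # example LWE dimension
    c = 8                          # hypothetical overhead exponent
    assumed_sivp_bits = 145        # hypothetical assumed SIVP hardness

    overhead_bits = c * math.log2(n)

    # The reduction says: an LWE attack of time T gives an SIVP algorithm of
    # time ~T * n^c.  So even granting 2^145 SIVP hardness, it only rules out
    # LWE attacks faster than roughly:
    guaranteed_lwe_bits = assumed_sivp_bits - overhead_bits
    print(f"reduction overhead:            ~2^{overhead_bits:.0f}")
    print(f"LWE security it can guarantee: ~2^{guaranteed_lwe_bits:.0f}")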
- By contrast, I know of no examples where an advantage-tight but poly-time-loose reduction hid a significant security gap in a carefully proposed and cryptanalyzed system.