> So, having looked through this table a bit, it appears that you estimate
> significantly worse security than the authors only for Emblem/R.Emblem.

I see the 2^68.90 "estimate" regarding EMBLEM-611, but I don't see a
clear statement of how the thing being estimated is supposed to be
related to the security of EMBLEM-611. Is there an attack that's
* claimed to break EMBLEM-611 and
* estimated to use 2^68.90 operations?

What exactly are the operations being counted, and what information is
available to justify using the estimate?

My impression is the 2^68.90 actually comes from
* taking a _small piece_ of a quantum attack against EMBLEM-611,
* dividing this piece into _expensive operations_,
* _asymptotically_ counting the number of these operations, and
* replacing o(1) in the asymptotics with 0.
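
For concreteness, here is a guess at the arithmetic. I'm assuming that the
estimator plugs a BKZ blocksize of roughly beta = 260 for EMBLEM-611 into
the asymptotic sieving exponents 0.292*beta (classical) and 0.265*beta
(quantum), with the o(1) simply dropped. This reproduces both table
entries, but it is a reconstruction on my part, not something the table
states:

```python
# Hypothetical reconstruction of the "Core-Sieve" numbers: plug an
# assumed BKZ blocksize beta into the asymptotic sieving cost exponents
# 2^(0.292*beta + o(1)) and 2^(0.265*beta + o(1)), with o(1) set to 0.
beta = 260  # assumed blocksize for the attack on EMBLEM-611

classical = 0.292 * beta  # classical sieving exponent, o(1) dropped
quantum = 0.265 * beta    # quantum sieving exponent, o(1) dropped

print(f"classical sieve: 2^{classical:.2f}")
print(f"quantum sieve:   2^{quantum:.2f}")
```

With beta = 260 this prints 2^75.92 and 2^68.90, matching the table;
whatever the o(1) actually is at this size is exactly the question being
swept under the rug.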
Overall this looks to me like a massive underestimate of the actual cost
of the attack, even if we're super-optimistic about progress in quantum
computation. My impression is also that this cost gap varies from one
submission to another, so it's entirely possible for a submission that
has lower security against the same attack to have a higher number in
the table---which undermines the primary use of tables such as this.

Suppose a metric (for example, this one) is a wild guess that severely
underestimates the cost of an attack. The metric is then pushed to the
left in the table. This sorting makes the metric much more visible,
especially since there are so many columns in the table. Even if readers
take the time to look at the larger numbers, they'll generally use the
underestimates, since they'll incorrectly assume that the 14 columns are
reporting the costs of 14 different attacks and that the smallest column
is the security level against all known attacks.

Non-quantum estimates for EMBLEM-611 include 2^75.92 for "Core-Sieve"
and 2^141.74 for "Core-Enum+O(1)". Sorting by "Core-Sieve" I see the
following two systems listed at the top:

                     Core-Sieve   Core-Enum+O(1)
EMBLEM n=611              75.92           141.74
uRound2.KEM n=500         83.74           125.93

These four numbers are telling completely different stories regarding
the security of EMBLEM-611 and uRound2.KEM-500. Is EMBLEM-611 hundreds
of times easier to break than uRound2.KEM-500, or thousands of times
harder? How do these compare to the "2^143 classical gates" that NIST
estimated for finding an AES-128 key (category 1)?
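
As an arithmetic check on how different the stories are, using only the
four table entries above and NIST's 2^143 figure:

```python
# Cost ratios between EMBLEM-611 and uRound2.KEM-500 implied by each
# metric, plus each EMBLEM-611 entry relative to NIST's log2 gate-count
# estimate for AES-128 key search (category 1).
emblem  = {"Core-Sieve": 75.92, "Core-Enum+O(1)": 141.74}
uround2 = {"Core-Sieve": 83.74, "Core-Enum+O(1)": 125.93}
aes128 = 143  # NIST's 2^143 classical gates for finding an AES-128 key

for col in emblem:
    gap = uround2[col] - emblem[col]  # >0 means EMBLEM-611 easier to break
    ratio = 2 ** abs(gap)
    verdict = "easier" if gap > 0 else "harder"
    print(f"{col}: EMBLEM-611 ~{ratio:.0f}x {verdict} than uRound2.KEM-500")
    print(f"  EMBLEM-611 relative to AES-128: 2^{emblem[col] - aes128:+.2f}")
```

"Core-Sieve" says EMBLEM-611 is a couple hundred times easier to break
and far below the AES-128 floor; "Core-Enum+O(1)" says it is tens of
thousands of times harder and essentially at the AES-128 floor.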
My impression is that the 75.92 and 83.74 are massive underestimates
(as above, except this time non-quantum). In the real world, public
experiments are passing 2^64 cycles and continue to show enumeration
beating sieving. Section 6.7 of the NTRU Prime submission identifies
serious errors in typical claims regarding the costs of sieving; for a
proper analysis one needs to look at the cost of hardware (offhand I'd
guess that the "Core-Sieve" shown above is using roughly 2^70 bits of
memory, while "Core-Enum" can fit inside a small GPU core) and then
account for communication costs and parallelism.
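
To illustrate the memory point with a crude count, again assuming a
blocksize around beta = 260, the standard heuristic list size of about
2^(0.2075*beta) vectors, and (say) 32-bit coordinates; all three are
assumptions:

```python
import math

# Back-of-envelope sieving memory estimate under the stated assumptions.
beta = 260                     # assumed BKZ blocksize
log2_vectors = 0.2075 * beta   # heuristic list size: 2^(0.2075*beta) vectors
bits_per_vector = beta * 32    # 32-bit coordinates (assumption)
log2_bits = log2_vectors + math.log2(bits_per_vector)
print(f"sieving memory: about 2^{log2_bits:.1f} bits")
```

This lands around 2^67 bits; pushing the assumptions around moves it a
few powers of two in either direction, which is the ballpark of the 2^70
guess above, and in any case astronomically more than the poly(beta)
memory that enumeration needs.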
If there's a serious argument that sieving beats enumeration within
feasible cryptanalytic computations despite these effects, then this
argument should be presented in a paper for further analysis. Yes,
getting the numbers right is real work, but what's the alternative?
Saying that it's "conservative" to massively underestimate costs is
missing the point: if EMBLEM-611 is more secure than uRound2.KEM-500,
but a massive underestimate of costs is telling users to switch from
EMBLEM-611 to uRound2.KEM-500, then security has been reduced.

Of course a central database of lattice dimensions etc. is useful, but
this doesn't mean that the database should be fed through every
available "estimate" without regard to accuracy.
---Dan