The authors mention early in the paper that an alternative to repeated
hashing is to try to decode an "undecodable" syndrome by brute-force
exploration of its neighborhood in code space, that is, by repeatedly
adding some number of random columns of the parity check matrix until one
obtains a decodable syndrome. For some unspecified reason the authors
abandon this approach, but it seems to me that it is just as efficient as
the repeated hashing approach, and if we explore the neighborhood of the
syndrome systematically instead of randomly, we would get a signature scheme
whose signature length and signing time are bounded by a function of the
maximum distance of the Goppa code.
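To make the systematic variant concrete, here is a minimal sketch (my own, not from the paper) of exploring a syndrome's neighborhood in order of increasing perturbation weight. The `try_decode` callable stands in for the Goppa syndrome decoder and is an assumption; syndromes and columns are modeled as integer bitmasks.

```python
import itertools

def systematic_search(syndrome, H_columns, try_decode, max_weight):
    """Try perturbations of the syndrome in order of increasing weight.

    syndrome   -- the (undecodable) syndrome, as an int bitmask
    H_columns  -- columns of the parity check matrix, each an int bitmask
    try_decode -- hypothetical decoder: returns an error vector, or None
                  if the syndrome is undecodable
    max_weight -- largest number of columns to add (covering-radius bound)
    """
    for w in range(max_weight + 1):
        for idxs in itertools.combinations(range(len(H_columns)), w):
            s = syndrome
            for i in idxs:
                s ^= H_columns[i]   # add the chosen columns mod 2
            e = try_decode(s)
            if e is not None:
                return idxs, e      # perturbation found + decoded error
    return None
```

Because the search goes through perturbations in a fixed order of increasing weight, the signature (the index set `idxs`) is bounded in length by the covering radius rather than depending on luck with rehashing.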
Actually, I'm not sure "maximum distance" is the right terminology, but you
probably know what I mean: distance to the closest codeword in the worst
case. My question is, how can we compute this distance for a Goppa code?
Here is an example where a computed weight distribution includes the minimum
and maximum distances, although not for a Goppa code:
http://magma.maths.usyd.edu.au/magma/Examples/node114.html
Is this scheme still unbroken? I find the arguments by the authors
rather unconvincing.
If nothing has been published so far, I'll go over my notes and check if
something substantial can be found.
Daniel Bleichenbacher
It's still unbroken, at least according to this recent review:
http://eprint.iacr.org/2006/162
A Summary of McEliece-Type Cryptosystems and their Security
D. Engelbert, R. Overbeck and A. Schmidt
I also didn't find any attacks using a search of Google Scholar.
And to partially answer my own question, apparently the correct terminology
for what I'm looking for is "covering radius". I'm now searching the
literature with the right term...
Ok, I think I have a simple attack that requires about 2^57 hash
computations.
I don't have much time to check this again right now. If you are interested
I could send you my notes.
Please do, I'm certainly interested.
I sent a private request but did not get a response. If you already sent the
notes, it probably got lost due to spam filters. Can you please try again
using this web form here: http://ibiblio.org/weidai/mailme.php, or just post
your notes in this newsgroup. Email has become so unreliable now, sigh...
should work. It takes about 2^45 memory and 2^57 hash computations to forge
a signature. Unfortunately he also said he's not planning to publish the
paper due to certain reasons.
And to answer my own question that started off this thread, I found a paper
[1] that shows the covering radius of a Goppa code to be bounded by 2t+1,
which is the same as its minimum distance. So a variant of the CFS signature
scheme that uses systematic search instead of rehashing (as I suggested in
the opening post) would still have a maximum signature length that is
approximately twice its average signature length (and a maximum signing time
that is exponential in the average signature length).
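To put rough numbers on that last claim, here is a back-of-the-envelope
sketch (my own, with illustrative toy parameters, not taken from [1]): a
random syndrome of a t-error-correcting Goppa code decodes with probability
roughly 1/t!, while a systematic search within the covering radius 2t+1 may
in the worst case have to enumerate every perturbation of weight up to
about t+1.

```python
from math import comb, factorial

# Toy CFS-like parameters (illustrative assumption): a binary Goppa code
# of length n = 2^16 correcting t = 9 errors.
n, t = 2**16, 9

# Rehashing: a random syndrome decodes with probability roughly 1/t!, so
# signing takes about t! decoding attempts on average.
avg_attempts = factorial(t)

# Systematic search: with covering radius at most 2t+1, every syndrome is
# the syndrome of some error of weight <= 2t+1, so adding the right t+1
# (or fewer) columns of H always yields a decodable syndrome.  The worst
# case enumerates every perturbation of weight up to t+1:
worst_case = sum(comb(n, d) for d in range(t + 2))
```

Since C(n, t+1) is on the order of n^(t+1)/(t+1)!, the worst-case search
cost is exponential in t*log(n), which is roughly the average signature
length.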
The existence of Bleichenbacher's attack and the lack of a better bound on
its maximum signature length make the CFS signature scheme rather less
attractive than it appeared initially. The construction of an n-bit
signature scheme with close to n-bit security still seems to be an open
problem.
[1] Françoise Levy-dit-Vehel and Simon Litsyn, "Parameters of Goppa codes
revisited", http://www.ensta.fr/~levy/goppaweb.ps
"daniel bleichenbacher" <daniel_ble...@yahoo.com> wrote in
message
news:1166350313....@80g2000cwy.googlegroups.com...