Francois Grieu
There's some interesting discussion about that failure here:
http://rdist.root.org/2010/11/19/dsa-requirements-for-random-k-value/
« Each key (for each different type of loader) seems to have an
associated random number ‘m’ — the numbers follow no pattern, but they
are consistent between different signatures on different versions of the
same loader — almost as if they treated ‘m’ as one of the parameters of
the key. Any idea what error in understanding might have caused that? »
If I understand correctly, this means Sony decided they were smart
enough to reimplement their own version of ECDSA from scratch.
Famous last words.
This being said, the weakness of DSA/ECDSA in the face of weak random
generators is a real drawback of these algorithms that is rarely
talked about. The above article contains some really interesting
discussion of that, and of how Sony are *not* the first ones to be
bitten hard by this. Not only are fully predictable values of k
broken; so are the cases where just a few bits of k are predictable
(once enough signatures have been collected).
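The nonce-reuse case is simple enough to work through with toy numbers. A minimal sketch in Python (illustrative DSA parameters I picked for the example, nothing like real key sizes): two signatures sharing the same k give two linear equations in the two unknowns k and x, and the private key falls out.

```python
# Toy DSA over a tiny group -- illustrative parameters only; real DSA/ECDSA
# uses cryptographically sized p, q (and curve points for ECDSA).
p, q, g = 607, 101, 64   # q divides p - 1; g has order q mod p
x = 57                   # the victim's private key
k = 77                   # the "random" nonce, wrongly reused for both signatures

def sign(z):             # z is the message hash, already reduced mod q
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (z + x * r) % q
    return r, s

z1, z2 = 40, 85
(r1, s1), (r2, s2) = sign(z1), sign(z2)
assert r1 == r2          # same k => same r: the telltale sign of nonce reuse

# With s1*k = z1 + x*r and s2*k = z2 + x*r (mod q), subtract to get k,
# then solve either equation for the private key x:
k_rec = (z1 - z2) * pow(s1 - s2, -1, q) % q
x_rec = (s1 * k_rec - z1) * pow(r1, -1, q) % q
assert (k_rec, x_rec) == (77, 57)   # full key recovery
```

(`pow(a, -1, m)` for the modular inverse needs Python 3.8+.)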
The DSA Wikipedia entry compares this to the problems you run into
when you use RSA badly, but AFAIK none of the mistakes you can make
with RSA will reveal your private key, so IMO that problem is on a
different scale entirely.
If I recall correctly, early Wii firmware would strcmp() the computed
r' against the signature's r in their ECDSA verify. Which of course
led to the search for r values with a leading zero byte.
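The failure mode is easy to reproduce: strcmp() treats its inputs as NUL-terminated strings, so comparison stops at the first zero byte. A minimal Python sketch of the C semantics, with made-up values (not actual Wii code):

```python
def strcmp_like(a: bytes, b: bytes) -> int:
    """C strcmp semantics: compare bytewise until a mismatch or a NUL."""
    for x, y in zip(a, b):
        if x != y:
            return x - y
        if x == 0:        # NUL terminates the "string"; the rest is ignored
            return 0
    return len(a) - len(b)

# Two entirely different 20-byte values that both start with a zero byte:
r_good = bytes([0x00]) + b"A" * 19
r_forged = bytes([0x00]) + b"B" * 19

assert r_good != r_forged                    # memcmp-style compare: different
assert strcmp_like(r_good, r_forged) == 0    # strcmp-style compare: "equal"
```

Hence the attack: grind out a forgery whose r starts with 0x00 and the broken comparison accepts it.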
From watching the CCC talk as hosted on Vimeo, it seems most of their
problems fall into one of two camps:
1. They trust things they shouldn't [e.g. reading a length, not
sanity checking it, then memcpy'ing sploit code into a trusted region].
or
2. Applying crypto incorrectly [or the wrong crypto].
Like their USB firmware update token, which used an HMAC
authentication scheme that was of course totally broken, leading to
yet more exploits, and so on.
It's stories like these that we try to get customers to focus on when
they're arguing about spending money on secure execution
environments... Yet they always want to cut corners... tsk tsk
tsk...
Tom
There are a few exceptions to that:
1) When using RSA with CRT, an error during one of the two
modular exponentiations (a corner case in the quotient estimation,
a power glitch, a random error due to a cosmic ray, a deliberate poke
on the stack...) can reveal the private key. See:
On the importance of checking cryptographic protocols for faults
Dan Boneh, Richard A. DeMillo and Richard J. Lipton (Bellcore labs).
Extended abstract in Proceedings of Eurocrypt 1997
Full paper in Journal of Cryptology, Springer-Verlag, Vol. 14, No. 2, pp. 101--119, 2001
http://www.springerlink.com/content/8mn7cw1h1f24kdc6/
http://www.springerlink.com/content/cljfg7u5n4bw312a/
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.9764
http://crypto.stanford.edu/~dabo/pubs/abstracts/faults.html
2) Various side-channel leakage can do the same; see e.g.
Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems
Paul Kocher, Proceedings of Crypto 1996
http://www.springerlink.com/content/4el17cvre3gxt4gd/
http://www.cryptography.com/public/pdf/TimingAttacks.pdf
3) When using an RSA variant with an even exponent (the Rabin
cryptosystem) and a poor padding scheme, a few (as few as two)
signatures reveal the private key. See:
A Chosen Messages Attack on the ISO/IEC 9796-1 Signature Scheme.
François Grieu, Proceedings of Eurocrypt 2000
http://www.springerlink.com/content/hhjd2j9bglfnl341/
http://fragrieu.free.fr/paper9796.pdf
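The CRT case in 1) can be sketched with toy numbers (illustrative primes I chose for the example, nothing like a real key): if one of the two half-exponentiations is faulted, gcd(s^e - m, N) hands over a prime factor of the modulus.

```python
from math import gcd

# Toy RSA-CRT -- illustrative parameters; real keys use 1024+ bit primes.
p, q = 53, 59
N, e = p * q, 7
d = pow(e, -1, (p - 1) * (q - 1))

m = 1234
s_p = pow(m, d, p)               # correct half of the CRT signature
s_q = (pow(m, d, q) + 1) % q     # faulty half: a single glitch mod q

# Recombine the two halves with the CRT into the (faulty) full signature
s_fault = (s_p * q * pow(q, -1, p) + s_q * p * pow(p, -1, q)) % N

# s_fault^e == m still holds mod p, but not mod q, so the gcd leaks p:
factor = gcd(pow(s_fault, e, N) - m, N)
assert factor == p
```

This is why any serious RSA-CRT implementation verifies its own signature (or uses other countermeasures) before releasing it.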
Francois Grieu
I found the article quite interesting. I also read the article that
I stumbled into from that site,
http://rdist.root.org/2009/05/17/the-debian-pgp-disaster-that-almost-was/
which contains, among other things, the following quote.
"The lesson is that in crypto, any partial knowledge you give an
attacker can possibly result in a complete compromise. It is extremely
fragile."
From years of posting on this forum, some of us lower-valued people
have commented on the dangers of allowing any kind of partial
information in ciphertext. But it always seems to land on deaf ears.
I for one have frequently called for bijective compression and
bijective padding, but the elite reject it, saying things like AES is
perfect, no need to worry about how much info is added in during
padding and compression and such, since it's provably secure. Anyone
with basic knowledge should realize that it's not provably secure,
especially if you design the padding and compression to be insecure.
I suspect these are the same people educating the young of the world
that CO2 is bad and will need to be limited. What nonsense. Also the
lie of global warming. Yes, get mad at me for that too. It's cold
outside.
David A. Scott
--
My Crypto code
http://bijective.dogma.net/crypto/scott19u.zip
http://www.jim.com/jamesd/Kong/scott19u.zip old version
My Compression code http://bijective.dogma.net/
**TO EMAIL ME drop the roman "five" **
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said, "any cryptographic
system is only as strong as its weakest link".
<snip your unbroken diatribe>
Except that, if you read the results properly, nothing here had
ANYTHING to do with breaking AES, chaining modes, or heck, even
EC-DSA. It had everything to do with how they glued it together.
It's like they encrypted with DS-super-duper-19-trillion-bit-key-
bijective-mega-crypt with a default key of all zero.
Of course you'd know that, if you read *AND* understood the results.
Instead, you use any break in modern cryptosystems, whether applied
to the algorithms or the system, as a soapbox to peddle your mediocre
designs...
Tom
>This being said, the weakness of DSA/ECDSA with regard to weak random
>generators is a real drawback of that algorithm that is usually very little
>talked about.
It goes a long way beyond that, see the discussion in the root labs article
referenced earlier, DSA/ECDSA are extremely brittle algorithms. With RSA as
long as you remember to use encode-and-memcmp for your sig check there's not
much that can go wrong, while the DLP algorithms have quite a number of very
subtle problems that you have to carefully work around. What's more worrying
is that every few years some new one crops up that seems to affect most of the
implementations around at the time. It's the unknown unknowns that'll get you
in the end.
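The encode-and-memcmp pattern can be sketched with toy RSA numbers (the `encode` function here is a stand-in I made up for a real deterministic encoding such as EMSA-PKCS1-v1_5): rebuild the expected block yourself and compare whole byte strings, rather than parsing the attacker-supplied recovered block.

```python
import hmac

# Toy RSA -- illustrative sizes only; real moduli are 2048+ bits.
p, q = 53, 59
N, e = p * q, 7
d = pow(e, -1, (p - 1) * (q - 1))

def encode(msg: bytes) -> bytes:
    # Stand-in for a real encoding like EMSA-PKCS1-v1_5; a real one hashes
    # and pads the message. The leading zero keeps the value below N.
    return bytes([0, sum(msg) % 251])

def sign(msg: bytes) -> int:
    return pow(int.from_bytes(encode(msg), "big"), d, N)

def verify(msg: bytes, sig: int) -> bool:
    # Encode-and-compare: rebuild the block we expect, then compare it
    # byte for byte against the recovered one -- never parse attacker data.
    recovered = pow(sig, e, N).to_bytes(2, "big")
    return hmac.compare_digest(recovered, encode(msg))

sig = sign(b"firmware v1.0")
assert verify(b"firmware v1.0", sig)
assert not verify(b"firmware v6.66", sig)
```

Parsing the recovered block instead of comparing it is what enabled, e.g., the 2006 Bleichenbacher low-exponent forgeries against sloppy PKCS#1 v1.5 verifiers.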
Peter.
While that is true of the signature process, it's not true of keygen.
People have gotten that wrong in exactly the same way Sony fucked up
their DSA application. The moral of the story here is to sanity check
your applications. If you generate two keys or two signatures [non
PKCS #1 v1.5] in a row and get the same bytestream out ... you're
doing something wrong.
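That sanity check is cheap to automate. A hypothetical sketch (the `sanity_check_randomized` helper and the two toy signers are made up for illustration):

```python
import os

def sanity_check_randomized(operation, runs: int = 2) -> None:
    """Fail hard if a supposedly randomized operation repeats its output."""
    outputs = [operation() for _ in range(runs)]
    if len(set(outputs)) != len(outputs):
        raise RuntimeError("randomized operation repeated a bytestream -- "
                           "check your RNG seeding!")

broken_sign = lambda: b"\x01" * 16   # constant "nonce": the Sony failure mode
ok_sign = lambda: os.urandom(16)     # fresh randomness on every call

sanity_check_randomized(ok_sign)     # passes
caught = False
try:
    sanity_check_randomized(broken_sign)
except RuntimeError:
    caught = True
assert caught
```

Two runs already catch a completely unseeded generator; it obviously says nothing about subtler biases.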
Tom
> On Jan 4, 6:07 pm, biject <biject.b...@gmail.com> wrote:
>> "The lesson is that in crypto, any partial knowledge you give an
>> attacker can possibly result in a complete compromise. It is extremely
>> fragile."
>>
>> From years of posting on this forum, some of us lower-valued people
>> have commented on the dangers of allowing any kind of partial
>> information in ciphertext. But it always seems to land on deaf ears.
>> I for one have frequently called for bijective compression and
>> bijective padding, but the elite reject it, saying things like AES
>> is perfect, no need to worry about how much info is added in during
>> padding and compression and such, since
>
> <snip your unbroken diatribe>
>
> Except if you read the results properly nothing here had ANYTHING to
> do with breaking AES, chaining modes, or heck even EC-DSA. It had
> everything to do with how they glued it together.
And the nature of the application surely makes it inevitable that it'll
have biject's (claimed) vulnerability? The whole point is that a PS3
(with its associated software) can verify whether a blob has been
correctly signed or not, and there's just no way to avoid that, though
with cunning hardware and things you might be able to mitigate it (by
hiding the public key carefully in hardware which verifies slowly, that
kind of thing).
[...]
EC-DSA was broken in this case because they failed to seed their PRNG
correctly [at all, who knows]. It has nothing to do with algorithm
choice. Using "bijective signatures" or whatever he was suggesting is
not the solution.
Tom
> EC-DSA was broken in this case because they failed to seed their PRNG
> correctly [at all, who knows]. It has nothing to do with algorithm
> choice.
They reused the same nonce for every message. Doh!
cf. Console Hacking 2010 Part 3 (from the 5:30 mark)
http://www.youtube.com/watch?v=84WI-jSgNMQ
The textbook ECDSA asks for better than a nonce (a number used once)
for the k used in each signature: k is supposed to be a secret random
number. If some relation is known between the k values in different
signatures, all hell may break loose, according to the well-informed
http://rdist.root.org/2010/11/19/dsa-requirements-for-random-k-value
Francois Grieu
Thanks for the link :-)
cf. also slides 120-130 in fail0verflow's presentation
http://events.ccc.de/congress/2010/Fahrplan/attachments/1780_27c3_console_hacking_2010.pdf
(link given in the Wikipedia article)
http://en.wikipedia.org/wiki/Elliptic_Curve_DSA
cf. also http://fail0verflow.com/
(very barren at the moment)