>DSA and ECDSA signatures are only secure if the hash algorithm is specified
>in the certificate, presumably as part of the AlgorithmIdentifier in the
>SubjectPublicKeyInfo.
It's in the (badly-named) signature field of the cert; if it were in the
signatureAlgorithm it wouldn't be covered by the sig. Having said that, I
don't know how many implementations actually check whether what's in the
signature corresponds to the signatureAlgorithm. I tried it many years ago
(md5With... vs sha1With...) and nothing much seemed to notice, as long as the
signatureAlgorithm was the one that was correct for the signature.
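As a sketch of the check in question (in Python, with the decoded cert faked
up as dicts, since the actual field names depend on your ASN.1 library):

```python
# Sketch of the consistency check: the AlgorithmIdentifier inside the signed
# TBSCertificate (the "signature" field, covered by the sig) must match the
# outer "signatureAlgorithm" (which isn't covered).  The structures here are
# illustrative stand-ins for a real ASN.1 decoder's output.

SHA1_WITH_RSA = "1.2.840.113549.1.1.5"     # sha1WithRSAEncryption
SHA256_WITH_RSA = "1.2.840.113549.1.1.11"  # sha256WithRSAEncryption

def check_algorithm_consistency(cert):
    inner = cert["tbsCertificate"]["signature"]  # protected by the sig
    outer = cert["signatureAlgorithm"]           # not protected
    if inner != outer:
        raise ValueError("signature / signatureAlgorithm mismatch: "
                         f"{inner} vs {outer}")
    return inner

good = {"tbsCertificate": {"signature": SHA256_WITH_RSA},
        "signatureAlgorithm": SHA256_WITH_RSA}
bad = {"tbsCertificate": {"signature": SHA256_WITH_RSA},
       "signatureAlgorithm": SHA1_WITH_RSA}  # attacker swapped the outer field

check_algorithm_consistency(good)  # OK
try:
    check_algorithm_consistency(bad)
except ValueError as e:
    print("rejected:", e)
```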
Peter.
>Surprised you didn't know that, considering who you are.
Uhh, I did know that, that's why I checked it about 15-odd years ago to see if
anything would notice if it was done wrong (I can't remember the exact date,
it was when I was still updating the X.509 Style Guide, so around 1999 or
2000). At the time it was OK with DLP sigs as well, since the only thing that
was allowed was DSA + SHA-1, and obviously PKCS #1 was fine too; it was ISO
9796-2 certs that had problems, but then they were barely used by anyone
except some European banks.
Now, we have ECDSA, which not only has this problem in spades but even has a
requirement (X9.62) to truncate the hash to fit the group order. So you can
swap in any hash you want (that's specified for use with ECDSA) and standards-
compliant code will helpfully adjust it for you to make sure the attack works.
Certs protect against this (assuming the code checks the signature vs.
signatureAlgorithm), but generalised signing doesn't, e.g. with both PGP and
CMS / S/MIME the hash function identifier isn't an authenticated attribute.
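For reference, the X9.62-style truncation looks like this (a Python sketch of
the bits-to-integer conversion, not taken from any particular implementation):

```python
import hashlib

def bits2int(digest: bytes, qbits: int) -> int:
    # X9.62 / FIPS 186-style conversion: take only the leftmost qbits of
    # the hash, i.e. right-shift away any excess bits so the result fits
    # the group order.
    e = int.from_bytes(digest, "big")
    excess = len(digest) * 8 - qbits
    return e >> excess if excess > 0 else e

msg = b"example message"
qbits = 160  # e.g. a 160-bit group order

e_sha256 = bits2int(hashlib.sha256(msg).digest(), qbits)
e_sha1 = bits2int(hashlib.sha1(msg).digest(), qbits)

# Both hashes end up reduced to the same 160-bit space before entering the
# ECDSA equations, so the verifier can't tell which one the signer used
# unless the hash algorithm identifier itself is authenticated.
print(hex(e_sha256))
print(hex(e_sha1))
```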
This is why CMS authenticated-enveloped-data MACs the
EncryptedContentInfo.ContentEncryptionAlgorithmIdentifier, so you can't
manipulate the algorithm parameters:
The EncryptedContentInfo.ContentEncryptionAlgorithmIdentifier must be
protected alongside the encrypted content; otherwise, an attacker
could manipulate the encrypted data indirectly by manipulating the
encryption algorithm parameters, which wouldn't be detected through
MACing the encrypted content alone. For example, by changing the
encryption IV, it's possible to modify the results of the decryption
after the encrypted data has been verified via a MAC check.
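The IV trick is easy to demonstrate from the CBC structure alone. The
following Python sketch uses a deliberately fake XOR "block cipher" purely to
show the structural point that P1 = D(C1) XOR IV, so flipping bits in an
unauthenticated IV flips the matching bits of the first plaintext block
without invalidating a MAC computed over the ciphertext alone:

```python
import hashlib, hmac

def toy_block_encrypt(key, block):
    # NOT a real cipher -- a keyed XOR pad, just enough to exhibit the
    # CBC relation P1 = D(C1) XOR IV.
    pad = hashlib.sha256(key).digest()[:16]
    return bytes(a ^ b for a, b in zip(block, pad))

toy_block_decrypt = toy_block_encrypt  # an XOR pad is its own inverse

def cbc_encrypt_block(key, iv, pt):
    return toy_block_encrypt(key, bytes(a ^ b for a, b in zip(pt, iv)))

def cbc_decrypt_block(key, iv, ct):
    return bytes(a ^ b for a, b in zip(toy_block_decrypt(key, ct), iv))

key = b"k" * 16
iv = b"\x00" * 16
pt = b"PAY TO: alice   "
ct = cbc_encrypt_block(key, iv, pt)

# The MAC covers only the ciphertext, not the IV (algorithm parameters).
mac = hmac.new(b"mackey", ct, hashlib.sha256).digest()

# Attacker flips bits in the IV: "alice" -> "mallo" in the decrypted text,
# while the MAC over the unchanged ciphertext still verifies.
delta = bytes(a ^ b for a, b in zip(b"alice", b"mallo"))
evil_iv = iv[:8] + delta + iv[13:]
assert hmac.compare_digest(mac, hmac.new(b"mackey", ct, hashlib.sha256).digest())
print(cbc_decrypt_block(key, evil_iv, ct))  # b'PAY TO: mallo   '
```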
The test vectors then say:
For the triple DES-encrypted data, corrupting a byte at positions
192-208 can be used to check that payload-data corruption is
detected, and corrupting a byte at positions 168-174 can be used to
check that metadata corruption is detected.
to make sure that the code to check for problems is working as required.
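A sketch of that kind of self-test (the byte positions and encodings here are
made up for illustration, not taken from the actual vectors):

```python
import hashlib, hmac, os

def protect(mac_key, alg_id, enc_content):
    # The MAC covers both the algorithm identifier (metadata) and the
    # encrypted content, as in CMS authenticated-enveloped-data.
    return hmac.new(mac_key, alg_id + enc_content, hashlib.sha256).digest()

def verify(mac_key, alg_id, enc_content, tag):
    return hmac.compare_digest(tag, protect(mac_key, alg_id, enc_content))

mac_key = os.urandom(32)
alg_id = b"\x30\x10\x06\x08mock-oid"  # stand-in for a DER AlgorithmIdentifier
enc_content = os.urandom(64)
tag = protect(mac_key, alg_id, enc_content)

# Corrupt a byte of the payload: must be detected.
bad_content = bytearray(enc_content)
bad_content[10] ^= 1
assert not verify(mac_key, alg_id, bytes(bad_content), tag)

# Corrupt a byte of the metadata (e.g. an IV parameter inside the
# AlgorithmIdentifier): must also be detected.
bad_alg = bytearray(alg_id)
bad_alg[4] ^= 1
assert not verify(mac_key, bytes(bad_alg), enc_content, tag)

print("payload and metadata corruption both detected")
```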
Peter.
>This changed the moment SHA-2 came out, though one could interpret that the
>length of the signature elements would uniquely indicate the SHA variant.
>With SHA-256/160 and SHA-3, that is completely gone as there are now two SHA
>hash algorithms for each length. Plus any non-FIPS hash used outside FIPS-
>restricted contexts.
Thus the obvious lesson, "don't implement weirdo hash algorithms" (or more
than the minimum you need), although as I pointed out, even though no-one
should be using crazy stuff like SHA-256/160, X9.62 gives you that whether you
actually implement it or not. Even if you only support the universal-standard
SHA-1 and SHA-256 (and nothing else), you're still vulnerable.
Mind you, I wonder how serious it really is: since you're signing with e.g.
the full 256 bits from SHA-256 but verifying with only 160 bits from SHA-1,
it's a lot more work than just finding a collision. You'd have to find a
situation where SHA-1(m) * c mod n == SHA-256(m) * c mod n, not just where
SHA-1(m) == trunc160(SHA-256(m)).
Peter.