
Re: Verify X.509 certificate, openssl verify returns bad signature


Mounir IDRASSI

Aug 28, 2010, 10:17:35 PM
Hi,

The problem you are encountering is partly caused by the way OpenSSL
handles integers whose DER-encoded value starts with one or more zero
octets: in this case, OpenSSL removes the leading zeros when creating the
corresponding ASN1_INTEGER structure, so the DER computed from this
structure and the original encoding will be different!!

In your case, the certificate you are trying to verify has a DER-encoded
serial number of "00 00 65", so OpenSSL creates an ASN1_INTEGER with a
value of "00 65". In the course of the certificate signature
verification, this structure is encoded back to DER, which produces an
encoded value of "00 65". Thus, the generated DER of the CertInfo differs
from the original one, which explains why the signature verification
fails.

After some digging, I found that part of the problem is caused by the
functions c2i_ASN1_INTEGER and d2i_ASN1_UINTEGER in the file
crypto/asn1/a_int.c. At lines 244 and 314, there are if blocks that
remove any leading zeros. Commenting out these blocks fixes the DER
encoding mismatch, but the verification still fails because the computed
digest differs from the recovered one.

I will continue my investigation to find all the culprits.
Meanwhile, the question remains why the removal of the leading zeros
from the parsed DER encoding was added in the first place, since this
clearly has the side effect of making the computed DER differ from
the original.

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr


On 8/28/2010 10:43 PM, Goran Rakic wrote:
> Hi all,
>
> I have two X.509 certificates MUPCAGradjani.crt and MUPCARoot.crt
> downloaded from http://ca.mup.gov.rs/sertifikati-lat.html
>
> The certificate path is MUPCARoot > MUPCAGradjani, and I would like to
> validate MUPCAGradjani against the root. What I did was convert both
> to PEM format and rename them by hash as efd6650d.0 (Gradjani) and
> fc5fe32d.0 (Root) using this script:
>
> #!/bin/bash
> hash=$(openssl x509 -in "$1" -inform DER -noout -hash)
> echo "Saving $1 as $hash.0"
> openssl x509 -in "$1" -inform DER -out "$hash.0" -outform PEM
>
> Now I run:
>
> $ openssl verify -CApath . efd6650d.0
> error 7 at 0 depth lookup:certificate signature failure
> 16206:error:04077068:rsa routines:RSA_verify:bad signature:rsa_sign.c:255:
> 16206:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP lib:a_verify.c:173:
>
> Hm, that is not working. What am I doing wrong here?
>
> I am running OpenSSL 0.9.8k 25 Mar 2009 on Ubuntu 10.04 GNU/Linux. I
> also have my personal certificate issued by MUPCAGradjani that I would
> like to verify but it is failing with the same error (just one level
> down):
>
> $ openssl verify -CApath . qualified.pem
> qualified.pem: /CN=MUPCA Gradjani/O=MUP Republike Srbije/L=Beograd/C=Republika Srbija (RS)
> error 7 at 1 depth lookup:certificate signature failure
> 16258:error:04077068:rsa routines:RSA_verify:bad signature:rsa_sign.c:255:
> 16258:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP lib:a_verify.c:173:
>
> When I install the downloaded certificates in Windows using Internet
> Explorer and double-click on my personal certificate (qualified.cer), it
> looks valid. I am not sure, but I believe Windows is doing certificate
> chain validation, so the certificates and paths should be valid. After
> all, they are issued by a trusted CA.
>
> Output of "openssl x509 -nameopt multiline,utf8,-esc_msb -noout -text
> -in $1" looks reasonable for both downloaded certificates and is the
> same before and after conversion to PEM (using -inform DER in the first
> case). My take is that I am not doing the conversion properly, or maybe
> the original certificates are in some other format requiring an extra
> argument, but I cannot find an answer in the docs.
>
> How can I properly validate X.509 certificate from
> http://ca.mup.gov.rs/sertifikati-lat.html by certificate chain?
>
> Kind regards,
> Goran
>
>
> ______________________________________________________________________
> OpenSSL Project http://www.openssl.org
> User Support Mailing List openss...@openssl.org
> Automated List Manager majo...@openssl.org

______________________________________________________________________
OpenSSL Project http://www.openssl.org
Development Mailing List opens...@openssl.org
Automated List Manager majo...@openssl.org

Peter Sylvester

Aug 29, 2010, 5:43:37 AM
The encoding is invalid BER.
OpenSSL is tolerant on input but also destructive when copying:
whenever you use openssl x509 -in ... -out ... you remove one leading
zero octet.

IMHO OpenSSL should reject the cert because of the invalid encoding.

Mounir IDRASSI

Aug 29, 2010, 7:20:18 AM
Hi Peter,

Although the certificate's encoding of the serial number field breaks the
BER rule on minimal octet representation, it is well known that many CAs
and libraries treat this field as a blob and often encode it with a fixed
length, without caring about leading zeros.
Specifically, Peter Gutmann's X.509 Style Guide says this about the
field: "If you're writing certificate-handling code, just treat the
serial number as a blob which happens to be an encoded integer".

Moreover, major PKI libraries are tolerant regarding the encoding of a
certificate's serial number field, and they successfully verify the
certificate chain given by the original poster.

For example, NSS, GnuTLS and CryptoAPI all accept the given certificates
and verify their trust.

Whether or not to support specific broken implementations has always been
a subject of heated debate. Concerning the specific issue here, it is
clear that OpenSSL is more restrictive than the other major libraries:
this is a minor deviation from the BER specs (i.e. the minimal octet
representation), and rejecting it hurts deployment of real-world
certificates.

Peter Sylvester

Aug 29, 2010, 12:17:45 PM
On 08/29/2010 01:20 PM, Mounir IDRASSI wrote:
> Hi Peter,
>
> Although the certificate's encoding of the serial number field breaks the
> BER specification about the minimal bytes representation, it is known that
> many CA's and libraries treat this field as a blob and usually encode it
> on a fixed length basis without caring about leading zeros.
> Specifically, Peter Gutmann in his X.509 Style Guide says this about this
> field : "If you're writing certificate-handling code, just treat the
> serial number as a blob which happens to be an encoded integer".
>
You are quoting out of context.
That advice refers to negative integers, which can occur in 50% of cases.

A text written 10 years ago is not really an excuse for a certificate
issued this year.

> Moreover, major PKI libraries are tolerant vis-a-vis the encoding of the
> serial number field of a certificate and they verify successfully the
> certificate chain given by the original poster.
>

So what. The certs are still wrong.


> For example, NSS, GnuTLS and CryptoAPI accept the given certificates and
> verify successfully their trust.
>

Hm, importing the certs into Firefox tells me that the
certs cannot be validated, for unknown reasons.

The decoders in NSS and GnuTLS accept all kinds of
bad encodings, the BER/DER decoders being very
tolerant.


> Supporting or not specific broken implementations have always been the
> subject of heated debates.

X509 has been updated to decode and re-encode a certificate;
in this sense OpenSSL's behaviour of silently dropping an
octet is not very nice. But there are other potential minor
deviations.

> Concerning the specific issue here, it's clear
> that OpenSSL is too restrictive compared to other major libraries since
> this is a minor deviation from the BER specs (i.e. minimal bytes
> representation) and thus hurts deployments of real-world certificates.
>

Others are EXTREMELY permissive in decoding.

This minor deviation results in ambiguous DER. Given two encoded
values, 00 01 and 01, are these the same serial number or not?
This is asking for real trouble. Even when treated as a blob,
"major" implementations will display 1 for both.

I'd rather see openssl be more restrictive and reject bad encodings
(I am not talking about a negative number here).

and what about the version field:

02 06 00 00 00 00 00 02
02 06 01 23 00 00 00 02

some implementations treat the second as a v3

Peter Sylvester

Aug 29, 2010, 4:26:09 PM
On 08/29/2010 07:38 PM, Mounir IDRASSI wrote:
> Hi Peter,
>
> Thank you for your comments.
> As I said, this kind of debate can get very heated, and going down this
> road usually leads nowhere.
The debate may be whether and how something should be
done in openssl, I admit I had started that one.
> I am the first one to wish that the PKI world out there is ideal and
> everyone uses correctly validated modules. Unfortunately, we
> constantly have to balance correctness against practicality.
Some programs are not strict in verification, so be it.
But that has nothing to do with the fact that the certs in question are
not correctly encoded and may create unexpected behaviour...

>
> Concerning Firefox check, I have managed to load the chain and to
> validate it correctly using Firefox 3.6.8 under Windows and Ubuntu
> 10.04. I'm attaching screenshots.
Try editing the trust settings.

Or: try loading them without setting any trust during the import,
and set the trust later through the certificate manager.

Dr. Stephen Henson

Aug 29, 2010, 7:27:23 PM
Just to add a data point to this discussion: there is a mechanism in
OpenSSL to avoid re-encoding an ASN1 structure by caching the received
encoding.

This is already used in a few places for various reasons. It has the
advantage of making certificate verification quicker and avoiding extra
memory allocation. On the minus side, any application that modifies a
certificate structure and re-signs it will no longer work, because it
won't recognise that the cache is dirty.

As a quick test I updated the certificate definition to use a cached encoding
instead (3 line change) and the certificates now verify fine.

Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org

Erwann ABALEA

Aug 30, 2010, 4:53:16 AM
On Aug 29, 2010, Mounir IDRASSI wrote:
[...]

> Specifically, Peter Gutmann in his X.509 Style Guide says this about this
> field : "If you're writing certificate-handling code, just treat the
> serial number as a blob which happens to be an encoded integer".

This is the kind of advice that pushes programmers to allocate fixed-size
fields in databases and to assume a certificate's serial number will
always fit that size. That is also bad in practice.

--
Erwann ABALEA <erwann...@keynectis.com>
R&D Department
KEYNECTIS

Goran Rakic

Aug 30, 2010, 4:22:17 PM
On Mon, Aug 30, 2010 at 20:38 +0200, Dr. Stephen Henson wrote:
>
> I wouldn't advise changing the code in that way (FYI I wrote it). The normal
> workaround in OpenSSL for broken encodings is to use the original encoding
> by caching it. The attached three line patch adds this workaround for
> certificates.

Thanks Stephen. This preprocessor black magic looks very interesting; I
will spend some free time in the following days trying to understand it.

I read your message on openssl-dev about the dirty-cache issue. As a
naive reader of the code, I wonder: couldn't the "modified" field in the
cached data be set whenever the certificate data is modified, so that
the cache is invalidated? Would that allow integrating this patch
upstream?

Kind regards,
Goran Rakic
