I have two X.509 certificates, MUPCAGradjani.crt and MUPCARoot.crt,
downloaded from http://ca.mup.gov.rs/sertifikati-lat.html.
The certificate path is MUPCARoot > MUPCAGradjani, and I would like to
validate MUPCAGradjani against the other. What I did was convert both
to PEM format and rename them by hash as efd6650d.0 (Gradjani) and
fc5fe32d.0 (Root) using this script:
#!/bin/bash
# Compute the subject-name hash of a DER certificate and save a PEM
# copy named <hash>.0 so that "openssl verify -CApath" can find it.
hash=$(openssl x509 -in "$1" -inform DER -noout -hash)
echo "Saving $1 as $hash.0"
openssl x509 -in "$1" -inform DER -out "$hash.0" -outform PEM
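Invocation looks like this (the script name save-cert.sh is arbitrary):
$ ./save-cert.sh MUPCAGradjani.crt
Saving MUPCAGradjani.crt as efd6650d.0
$ ./save-cert.sh MUPCARoot.crt
Saving MUPCARoot.crt as fc5fe32d.0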
Now I run:
$ openssl verify -CApath . efd6650d.0
error 7 at 0 depth lookup:certificate signature failure
16206:error:04077068:rsa routines:RSA_verify:bad signature:rsa_sign.c:255:
16206:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP lib:a_verify.c:173:
Hm, that is not working. What am I doing wrong here?
I am running OpenSSL 0.9.8k 25 Mar 2009 on Ubuntu 10.04 GNU/Linux. I
also have my personal certificate, issued by MUPCAGradjani, that I would
like to verify, but it fails with the same error (just one level down):
$ openssl verify -CApath . qualified.pem
qualified.pem: /CN=MUPCA Gradjani/O=MUP Republike Srbije/L=Beograd/C=Republika Srbija (RS)
error 7 at 1 depth lookup:certificate signature failure
16258:error:04077068:rsa routines:RSA_verify:bad signature:rsa_sign.c:255:
16258:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP lib:a_verify.c:173:
When I install the downloaded certificates in Windows using Internet
Explorer and double-click on my personal certificate (qualified.cer), it
looks valid. I am not sure, but I believe Windows performs certificate
chain validation, so the certificates and paths should be valid. After
all, they are issued by a trusted CA.
Output of "openssl x509 -nameopt multiline,utf8,-esc_msb -noout -text
-in $1" looks reasonable for both downloaded certificates and is the
same before and after conversion to PEM (using -inform DER in the first
case). My take on this is that I am not doing conversion properly or
maybe the original certificates are in some other format requiring extra
argument, but I can not find answer in the docs.
How can I properly validate an X.509 certificate from
http://ca.mup.gov.rs/sertifikati-lat.html against its certificate chain?
Kind regards,
Goran
The problem you are encountering is partly caused by the way OpenSSL
handles integers whose DER-encoded value starts with one or more zero
octets: in this case, OpenSSL removes a leading zero when creating the
corresponding ASN1_INTEGER structure, so the DER computed from this
structure will differ from the original encoding!
In your case, the certificate you are trying to verify has a DER-encoded
serial number of "00 00 65", so OpenSSL creates an ASN1_INTEGER with a
value of "00 65". In the course of the certificate signature
verification, this structure is encoded back to DER, which yields the
encoded value "00 65". The generated DER of the CertInfo is thus
different from the original, which explains why the signature
verification fails.
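One way to observe this from the command line (a sketch; the output
offsets will vary, and the file name is the one from the original post):
$ # The serial number is the first INTEGER inside the TBSCertificate.
$ openssl asn1parse -inform DER -in MUPCAGradjani.crt | head
$ # Round-trip through OpenSSL and compare sizes: the re-encoded file
$ # should come out one byte shorter, the dropped leading zero octet.
$ openssl x509 -inform DER -in MUPCAGradjani.crt -outform DER -out rt.der
$ ls -l MUPCAGradjani.crt rt.der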
After some digging, I found that part of the problem is caused by the
functions c2i_ASN1_INTEGER and d2i_ASN1_UINTEGER in the file
crypto/asn1/a_int.c. At lines 244 and 314 there is an if block that
removes any leading zeros. Commenting out these blocks fixes the DER
encoding mismatch, but the verification still fails because the computed
digest differs from the recovered one.
I will continue my investigation to find all the culprits.
Meanwhile, the question remains why the removal of the leading zero from
the parsed DER encoding was added in the first place, since this clearly
has the side effect of making the computed DER differ from the original.
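For comparison, a conforming encoder always produces the minimal form.
A sketch (I am assuming asn1parse's -genstr generator together with
-out writes out the DER encoding; xxd just hex-dumps it):
$ # Minimal DER for the positive integer 0x65 (101) is 02 01 65;
$ # contents octets "00 00 65" would be non-minimal.
$ openssl asn1parse -genstr 'INTEGER:101' -out int.der
$ xxd int.der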
Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr
Whenever you use "openssl x509 -in ... -out ..." you remove one leading
0 octet. IMHO openssl should reject the cert because of the invalid
encoding.
Although the certificate's encoding of the serial number field breaks
the BER specification's rule on minimal-byte representation, it is known
that many CAs and libraries treat this field as a blob and usually
encode it on a fixed-length basis without caring about leading zeros.
Specifically, Peter Gutmann in his X.509 Style Guide says this about the
field: "If you're writing certificate-handling code, just treat the
serial number as a blob which happens to be an encoded integer".
Moreover, major PKI libraries are tolerant of how a certificate's serial
number field is encoded, and they successfully verify the certificate
chain given by the original poster.
For example, NSS, GnuTLS and CryptoAPI accept the given certificates and
successfully verify their trust.
Whether or not to support specific broken implementations has always
been the subject of heated debate. Concerning the specific issue here,
it is clear that OpenSSL is too restrictive compared to other major
libraries: this is a minor deviation from the BER specs (i.e. the
minimal-byte representation), and rejecting it hurts deployments of
real-world certificates.
A text written 10 years ago is not really an excuse for a certificate
from this year.
> Moreover, major PKI libraries are tolerant vis-a-vis the encoding of the
> serial number field of a certificate and they verify successfully the
> certificate chain given by the original poster.
>
So what. The certs are still wrong.
> For example, NSS, GnuTLS and CryptoAPI accept the given certificates and
> verify successfully their trust.
>
Hm, importing the certs into Firefox tells me that the certs cannot be
validated, for unknown reasons.
The decoders in NSS and GnuTLS accept all kinds of bad encodings; their
BER/DER decoders are very tolerant.
> Supporting or not specific broken implementations have always been the
> subject of heated debates.
openssl x509 decodes and re-encodes a certificate, and in this sense
OpenSSL's behaviour of silently dropping one octet is not very nice. But
there are other potential minor deviations.
> Concerning the specific issue here, it's clear
> that OpenSSL is too restrictive compared to other major libraries since
> this is a minor deviation from the BER specs (i.e. minimal bytes
> representation) and thus hurts deployments of real-world certificates.
>
Others are EXTREMELY permissive in decoding.
This minor deviation results in ambiguous DER. Given the two encoded
values 00 01 and 01: are these the same serial number, or not?
This is asking for real trouble. Even when taken as a blob, both will be
displayed as 1 in the "major" implementations.
I'd rather see openssl be more restrictive and reject bad encodings
(I am not talking about a negative number here).
And what about the version field:
02 06 00 00 00 00 00 02
02 06 01 23 00 00 00 02
Some decoders treat the second one as a v3 certificate.
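To see how a decoder treats these, the raw bytes can be fed to
asn1parse (a sketch; the file names are hypothetical):
$ echo 0206000000000002 | xxd -r -p > ver1.der
$ echo 0206012300000002 | xxd -r -p > ver2.der
$ openssl asn1parse -inform DER -in ver1.der
$ openssl asn1parse -inform DER -in ver2.der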
>
> Concerning Firefox check, I have managed to load the chain and to
> validate it correctly using Firefox 3.6.8 under Windows and Ubuntu
> 10.04. I'm attaching screenshots.
Try editing the trust settings.
Or: try loading them without setting any trust during the import, and
set the trust later through the certificate manager.
> The encoding is invalid BER.
> The openssl is tolerant but also destructive in copy.
>
> Whenever you use "openssl x509 -in ... -out ..." you remove one
> leading 0 octet.
>
> IMHO openssl should reject the cert because of the invalid encoding.
Nit: a redundant leading 00 (or FF) in an INTEGER is VALID *B*ER
but INVALID *D*ER. Signed things like certs are *D*ER for exactly this
reason, so that a reconstructed encoding is bit-for-bit identical and
hashes, signatures, etc. work.
X.690:
8 Basic encoding rules
...
8.3 Encoding of an integer value
8.3.1 The encoding of an integer value shall be primitive. The contents
octets shall consist of one or more octets.
8.3.2 If the contents octets of an integer value encoding consist of
more than one octet, then the bits of the first octet
and bit 8 of the second octet:
a) shall not all be ones; and
b) shall not all be zero.
NOTE – These rules ensure that an integer value is always encoded in the
smallest possible number of octets.
8.3.3 The contents octets shall be a two's complement binary number
equal to the integer value, and consisting of
bits 8 to 1 of the first octet, followed by bits 8 to 1 of the second
octet, followed by bits 8 to 1 of each octet in turn up to
and including the last octet of the contents octets.
NOTE – The value of a two's complement binary number is derived by
numbering the bits in the contents octets, starting with bit 1 of the
last octet as bit zero and ending the numbering with bit 8 of the first
octet. Each bit is assigned a numerical value of 2^N, where N is its
position in the above numbering sequence. The value of the two's
complement binary number is obtained by summing the numerical values
assigned to each bit for those bits which are set to one, excluding bit
8 of the first octet, and then reducing this value by the numerical
value assigned to bit 8 of the first octet if that bit is set to one.
Clauses 10 and 11 don't say anything more about INTEGER. The length
field in definite-form encoding may have redundant zeros in BER, though.
DER:
10.1 Length forms
The definite form of length encoding shall be used, encoded in the
minimum number of octets. [Contrast with 8.1.3.2 b).]
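A minimal shell sketch of the 8.3.2 check, taking the contents octets
of an INTEGER as a lowercase hex string (my own helper, not OpenSSL
code):
is_minimal() {
  local hex="$1"                # contents octets, e.g. "000065"
  [ ${#hex} -le 2 ] && return 0 # a single octet is always minimal
  local first=${hex:0:2} second=${hex:2:2}
  # 8.3.2 b): leading 00 with bit 8 of the next octet clear is redundant
  if [ "$first" = "00" ] && [ $((16#$second)) -lt 128 ]; then return 1; fi
  # 8.3.2 a): leading ff with bit 8 of the next octet set is redundant
  if [ "$first" = "ff" ] && [ $((16#$second)) -ge 128 ]; then return 1; fi
  return 0
}
is_minimal "000065" || echo "non-minimal INTEGER encoding"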
This is the kind of advice that pushes programmers to allocate
fixed-size fields in databases and to assume a certificate's serial
number will always fit that size. That is also bad in practice.
--
Erwann ABALEA <erwann...@keynectis.com>
Département R&D
KEYNECTIS
Thank you, I can confirm that your suggestion works.
Applying the patch you described does solve the problem for me. The
MUPCAGradjani certificate can be verified against MUPCARoot, as can
certificates issued by MUPCAGradjani, like the two personal certificates
I have on my eID card. I had to reconvert DER to PEM with the patched
openssl to get PEM certificates with the "correct" serial number
encoding.
I read the other messages in this thread, but I am not an expert in the
field, so I do not know whether openssl should add support for
"incorrect" serial numbers. RFC 3280 has a note about "non-conforming
CAs", and section "4.1.2.2 Serial number" says that "certificate users
SHOULD be prepared to gracefully handle such certificates". Maybe that
note applies in this case?
What I do know is that without a patch openssl cannot be used with
certificates issued on a Serbian national eID card. At least one other
Serbian CA is hit by the same problem (http://ca.pks.rs/certs/), whose
PKI solution was provided by the same company.
I have published a patched openssl package for the Ubuntu GNU/Linux
distribution in my Ubuntu PPA at:
https://launchpad.net/~grakic/+archive/serbian-eid
Kind regards,
Goran Rakic
These are not X.509 certificates, since they're not correctly encoded
(not DER, not even BER).
The paragraph you're mentioning is about the value of the serial number
(strictly positive, no more than 20 bytes), not about its encoding. A
serial number can be negative, or larger than 20 bytes when encoded, if
your only goal is to be X.509 compliant rather than RFC 5280 compliant.
Hence, "non-conforming CAs" here is to be understood as
"non-RFC5280-conforming CAs".
Those certificates should have been rejected by any correct validator
(human or machine) before going into production. The serial number is
encoded using 4 bytes for its value; it should be 1 byte only.
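A quick way to eyeball the encoded serial and its length (a sketch;
qualified.pem is the file from the original post, and the "l=" column
shows the encoded length in octets):
$ # The serial is the INTEGER right after the [0] version field in the
$ # TBSCertificate.
$ openssl asn1parse -in qualified.pem | head -6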
--
Erwann ABALEA <erwann...@keynectis.com>
Département R&D
KEYNECTIS
> On Sunday, 29.08.2010 at 04:17 +0200, Mounir IDRASSI wrote:
> > Commenting out these blocks fixes the DER encoding mismatch, but the
> > verification still fails because the computed digest differs from
> > the recovered one.
>
> Thank you, I can confirm that your suggestion works. [...] I have
> published a patched openssl package for the Ubuntu GNU/Linux
> distribution in my Ubuntu PPA at:
> https://launchpad.net/~grakic/+archive/serbian-eid
I wouldn't advise changing the code in that way (FYI, I wrote it). The
normal workaround in OpenSSL for broken encodings is to cache and reuse
the original encoding. The attached three-line patch adds this
workaround for certificates.
Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org
Thanks Stephen. This preprocessor black magic looks very interesting; I
will spend some free time trying to understand it in the coming days.
I read your message on openssl-dev about the issue with a dirty cache.
As a naive reader of the code, I am wondering: could the "modified"
field in the cached data not be set whenever the certificate data is
modified, to invalidate the cache? Would that allow integrating this
patch upstream?
Kind regards,
Goran Rakic
> On Monday, 30.08.2010 at 20:38 +0200, Dr. Stephen Henson wrote:
> > The normal workaround in OpenSSL for broken encodings is to cache
> > and reuse the original encoding.
>
> Thanks Stephen. This preprocessor black magic looks very interesting;
> I will spend some free time trying to understand it in the coming
> days.
>
Well, it is buried in the ASN1 code. All it does is use an extra
structure to save the received encoding. Then, when signatures are
calculated, that saved encoding is used instead of re-encoding the
parsed-out structure.
> I read your message on openssl-dev about the issue with a dirty
> cache. As a naive reader of the code, I am wondering: could the
> "modified" field in the cached data not be set whenever the
> certificate data is modified, to invalidate the cache? Would that
> allow integrating this patch upstream?
>
It isn't possible to cover all the cases where the certificate data is
modified, as some of them don't keep a reference to the parent
certificate structure.
However, it is possible to always re-encode when a certificate is
signed (this is done for CRLs), which should cover all cases except
pathological ones where a certificate is deliberately modified and not
re-signed in order to produce invalid signatures.
Steve.
--
Dr Stephen N. Henson. OpenSSL project core developer.
Commercial tech support now available see: http://www.openssl.org