
stateOrProvinceName field problem when signing CSR


Mick

Dec 15, 2011, 5:01:55 PM

Hi All,

I've generated a cakey.pem and cacert.pem on my PC, uploaded the cacert.pem
to my router, and used its GUI to generate a CSR.

When I try to sign this CSR file back on my PC I'm getting this error:
=====================================
$ openssl ca -config ./openssl_VPN.cnf -days 1095 -cert cacert_VPN.pem \
    -keyfile VPN_CA/private/cakey_VPN.pem -infiles certificate-router-request
Using configuration from ./openssl_VPN.cnf
Enter pass phrase for VPN_CA/private/cakey_VPN.pem:
Check that the request matches the signature
Signature ok
The stateOrProvinceName field needed to be the same in the
CA certificate (Buckinghamshire) and the request (Buckinghamshire)
=====================================

I don't understand why I get this error. Both the cacert and the
certificate-router-request files contain exactly the same ST= field. The
cacert_VPN.pem shows:

Issuer: C=GB, ST=Buckinghamshire, L= [snip ...]
Subject: C=GB, ST=Buckinghamshire, L= [snip ...]

and the CSR shows:

Subject: C=GB, ST=Buckinghamshire, L= [snip ...]


Under the CA policy options in the configuration file I have:

# For the CA policy
[ policy_match ]
countryName = match
stateOrProvinceName = match
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional

but given that the entries are the same, I am not sure why I get this error.
Any suggestions?
--
Regards,
Mick

Jakob Bohm

Dec 16, 2011, 6:23:06 AM

On 12/15/2011 11:01 PM, Mick wrote:
> Hi All,
>
> I've generated a cakey.pem and cacert.pem on my PC. Uploaded the cacert.pem
> to my router and used its gui to generate a CSR.
>
> When I try to sign this CSR file back on my PC I'm getting this error:
> =====================================
> $ openssl ca -config ./openssl_VPN.cnf -days 1095 -cert cacert_VPN.pem -keyfile
> VPN_CA/private/cakey_VPN.pem -infiles certificate-router-request
> Using configuration from ./openssl_VPN.cnf
> Enter pass phrase for VPN_CA/private/cakey_VPN.pem:
> Check that the request matches the signature
> Signature ok
> The stateOrProvinceName field needed to be the same in the
> CA certificate (Buckinghamshire) and the request (Buckinghamshire)
> =====================================
>
> I don't understand why I get this error. Both cacert and certificate-router-
> request files contain exactly the same ST= field. The cacert_VPN.pem shows:
>
> Issuer: C=GB, ST=Buckinghamshire, L= [snip ...]
> Subject: C=GB, ST=Buckinghamshire, L= [snip ...]
>
> and the CSR shows:
>
> Subject: C=GB, ST=Buckinghamshire, L= [snip ...]
Try repeating those output commands with the option

-nameopt multiline,show_type

to determine if the two disagree on the character encoding,
spacing or other subtle aspect of the ST= part of the name.
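For example (using the file names from your original post), something like
this should print the exact string type of each DN component:

openssl x509 -in cacert_VPN.pem -noout -subject -nameopt multiline,show_type
openssl req -in certificate-router-request -noout -subject -nameopt multiline,show_type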

If it turns out to be such a subtle difference, please report
it back to the list as a bug in the

>
> Under the CA policy options in the configuration file I have:
>
> # For the CA policy
> [ policy_match ]
> countryName = match
> stateOrProvinceName = match
> organizationName = match
> organizationalUnitName = optional
> commonName = supplied
> emailAddress = optional
>
> but given that the entries are the same, I am not sure why I get this error.
> Any suggestions?


Jakob Bohm

Dec 16, 2011, 6:31:59 AM

(Sorry, accidentally hit send, ignore previous mail)
it back to the list as a bug in the openssl code that handles
the "match" option, and as a workaround change the field to
"supplied" in the policy but manually inspect each CSR before
deciding to sign it (This would not work if the match is also
enforced by a path constraint in the CA cert).

If it turns out not to be such a subtle difference (or no
difference at all) please tell the list about it too.
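As a sketch, that workaround amounts to changing a single line in the
[ policy_match ] section quoted above:

[ policy_match ]
countryName = match
stateOrProvinceName = supplied # was "match"; now inspect each CSR by hand
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional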

Mick

Dec 16, 2011, 7:45:57 AM

Bingo! :-)

The problem seems to be that the router CSR shows:

stateOrProvinceName = PRINTABLESTRING:Buckinghamshire

while the cacert_VPN.pem shows:

stateOrProvinceName = UTF8STRING:Buckinghamshire

The whole router Subject content is:

Subject:
countryName = PRINTABLESTRING:blah
stateOrProvinceName = PRINTABLESTRING:Buckinghamshire
localityName = PRINTABLESTRING:blah
organizationName = PRINTABLESTRING:blah
organizationalUnitName = PRINTABLESTRING:blah
commonName = T61STRING:blah

while the cacert is:

Subject:
countryName = PRINTABLESTRING:blah
stateOrProvinceName = UTF8STRING:Buckinghamshire
organizationName = UTF8STRING:blah
organizationalUnitName = UTF8STRING:blah
commonName = UTF8STRING:blah

> to determine if the two disagree on the character encoding,
> spacing or other subtle aspect of the ST= part of the name.
>
> If it turns out to be such a subtle difference, please report
> it back to the list as a bug in the openssl code that handles
> the "match" option, and as a workaround change the field to
> "supplied" in the policy but manually inspect each CSR before
> deciding to sign it (This would not work if the match is also
> enforced by a path constraint in the CA cert).

Before I read your message I changed the "match" option to "optional" for the
ST field. Then openssl complained about the organizationName, so I changed
that to "optional" too. That let me issue the certificates - but I wasn't
sure if I was doing the right thing.


I have this in the openssl.cnf:

##############################################
[ req ]
default_bits = 2048
default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
attributes = req_attributes
x509_extensions = v3_ca # The extensions to add to the self-signed cert

# Passwords for private keys if not present they will be prompted for
# input_password = secret
# output_password = secret

# This sets a mask for permitted string types. There are several options.
# default: PrintableString, T61String, BMPString.
# pkix : PrintableString, BMPString (PKIX recommendation before 2004)
# utf8only: only UTF8Strings (PKIX recommendation after 2004).
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings.
string_mask = utf8only
##############################################

but even when I replaced it with

string_mask = default

I got the same error. So eventually I left it as utf8only. What should this
option be?


Thank you for your help! :-)

--
Regards,
Mick

Jakob Bohm

Dec 16, 2011, 9:07:10 AM

I think we may have a bug here - would anyone from the core team
like to comment on this?

The apparent bug:

When enforcing the "match" policy for a DN part, openssl reports an
error if the CSR has used a different string type for the field, but the
correct value (The naively expected behavior is to realize the strings
are identical and use the configured encoding for the resulting cert).

Related to this is to check what the behavior is for the following
similar operations:

2. Generating a cert from a CSR that uses a non-preferred string
type in its DN, here it should be explicit if the DN will be converted
to an enabled string type (e.g. from PrintableString to UTF8String or
from UTF8String to BMPString or from
UTF8StringWithOnlyBasicCharsPresent to PrintableString).

3. Validating a certificate whose issuing CA certificate specifies path
constraints where the issued certificate satisfies the path constraint
except for the exact choice of string type.

Technical note: All the defined string types have a well defined
mapping to and from 32 bit Unicode code points, with the following
one-way limitations:

BMPStrings can only represent U+0000 to U+10FFFF
(using UTF-16)

UTF8Strings can only represent U+0000 to U+7FFFFFFF
(allowing the possibility that some codepoints above U+10FFFF
may be assigned in the future, contrary to current policy).
(OpenSSL may or may not accept the CESU-8 and Java
Modified UTF-8 encoding variants and may or may not normalize
those to real UTF-8 before further processing).

PrintableString can only represent a specific small set of Unicode
code points

T61String can only represent a different specific small set.

Erwann Abalea

Dec 16, 2011, 9:22:41 AM

On 16/12/2011 15:07, Jakob Bohm wrote:
> I think we may have a bug here, anyone from the core team
> wish to comment on this.
>
> The apparent bug:
>
> When enforcing the "match" policy for a DN part, openssl reports an
> error if the CSR has used a different string type for the field, but the
> correct value (The naively expected behavior is to realize the strings
> are identical and use the configured encoding for the resulting cert).

Do you expect the "openssl ca" tool to apply the complete X.520
comparison rules before checking the policy?

> 3. Validating a certificate whose issuing CA certificate specifies path
> constraints where the issued certificate satisfies the path constraint
> except for the exact choice of string type.

NameConstraints is a set of constraints imposed on the semantic value of
the name elements, not on their encoding (string type, double-spacing,
case differences, etc).

>
> Technical note: All the defined string types have a well defined
> mapping to and from 32 bit Unicode code points, with the following
> one-way limitations:
>
> BMPStrings can only represent U+0000 to U+10FFFF
> (using UTF-16)
>
> UTF8Strings can only represent U+0000 to U+7FFFFFFF
> (allowing the possibility that some codepoints above U+10FFFF
> may be assigned in the future, contrary to current policy).
> (OpenSSL may or may not accept the CESU-8 and Java
> Modified UTF-8 encoding variants and may or may not normalize
> those to real UTF-8 before further processing).
>
> PrintableString can only represent a specific small set of Unicode
> code points
>
> T61String can only represent a different specific small set.

T.61 has no "well defined" bidirectional mapping with UTF8.
That said, T.61 was withdrawn before 1993 (IIRC) and shouldn't be used.

--
Erwann ABALEA

Jakob Bohm

Dec 16, 2011, 10:29:55 AM

On 12/16/2011 3:22 PM, Erwann Abalea wrote:
> On 16/12/2011 15:07, Jakob Bohm wrote:
>> I think we may have a bug here, anyone from the core team
>> wish to comment on this.
>>
>> The apparent bug:
>>
>> When enforcing the "match" policy for a DN part, openssl reports an
>> error if the CSR has used a different string type for the field, but the
>> correct value (The naively expected behavior is to realize the strings
>> are identical and use the configured encoding for the resulting cert).
>
> Do you expect the "openssl ca" tool to apply the complete X.520
> comparison rules before checking the policy?
Not unless there are OpenSSL functions to do the work.

Otherwise I just expect it to apply the character set conversions it
uses for its other operations (such as reading the config file/displaying DNs).

>
>> 3. Validating a certificate whose issuing CA certificate specifies path
>> constraints where the issued certificate satisfies the path constraint
>> except for the exact choice of string type.
>
> NameConstraints is a set of constraints imposed on the semantic value
> of the name elements, not on their encoding (string type,
> double-spacing, case differences, etc).
The question was how the OpenSSL code (library and command line) handles
the scenario; your answer seems to indicate that it is indeed supposed to
compare the semantic character sequence, not the encoding.
>
>>
>> Technical note: All the defined string types have a well defined
>> mapping to and from 32 bit Unicode code points, with the following
>> one-way limitations:
>>
>> BMPStrings can only represent U+0000 to U+10FFFF
>> (using UTF-16)
>>
>> UTF8Strings can only represent U+0000 to U+7FFFFFFF
>> (allowing the possibility that some codepoints above U+10FFFF
>> may be assigned in the future, contrary to current policy).
>> (OpenSSL may or may not accept the CESU-8 and Java
>> Modified UTF-8 encoding variants and may or may not normalize
>> those to real UTF-8 before further processing).
>>
>> PrintableString can only represent a specific small set of Unicode
>> code points
>>
>> T61String can only represent a different specific small set.
>
> T.61 has no "well defined" bidirectional mapping with UTF8.
> That said, T.61 was withdrawn before 1993 (IIRC) and shouldn't be used.
>
According to RFC1345, T.61 has a well defined mapping to named
characters also found in UNICODE. Some of those are encoded
as two bytes in T.61 (using a modifier+base char scheme), the
rest as one byte. That is what I mean by a bidirectional mapping
to a small (sub)set of UNICODE, meaning that most UNICODE
code points cannot be mapped to T.61, but the rest have a
bidirectional mapping.

According to the same source, 7-bit T.61 appears to be a proper
subset of 8-bit T.61.

Constructing a mapping table from the data in RFC1345 or other
sources is left as an exercise for the reader (cheat hint: Maybe
IBM included such a table in ICU or unicode.org included one in
its data files).

Lou Picciano

Dec 16, 2011, 10:33:27 AM

Jakob, All,

Glad this is coming up again, as we are having similar problems. Like you, we have string_mask = utf8only in config, and we have never been able to embed UTF8 chars into certs.

We're using the OS X Terminal Program, which is (purports to be?) UTF8-capable. We can enter the subject line of the CSR with either UTF8 codes, or with the 'actual' characters, and get the same mileage:

openssl req -new -sha1 -nodes \
-nameopt multiline,show_type \
-keyout private/THORSTROM.key \
-out csrs/THORSTROM.csr \
-subj "/O=ESBJÖRN.com/OU=Esbjörn-Thörstrom Group/CN=Áki Thörstrom"

==== Have also tried this with Stephen's suggested: # -nameopt oneline,-esc_msb \

In either case, signing the cert attempts to output the escaped UTF8 codes, not the actual coded characters, into the respective fields. The 'sign the certificate?' prompt reports:

        Subject:
            organizationName          = ESBJ\C3\96RN.com
            organizationalUnitName    = Esbj\C3\B6rn-Th\C3\B6rstrom Group
            commonName                = \C3\81ki Th\C3\B6rstrom

Thanks, Lou Picciano


From: "Jakob Bohm" <jb-op...@wisemo.com>
To: openss...@openssl.org
Sent: Friday, December 16, 2011 9:07:10 AM
Subject: Re: stateOrProvinceName field problem when signing CSR


I think we may have a bug here, anyone from the core team
wish to comment on this.

The apparent bug:

When enforcing the "match" policy for a DN part, openssl reports an
error if the CSR has used a different string type for the field, but the
correct value (The naively expected behavior is to realize the strings
are identical and use the configured encoding for the resulting cert).

Related to this is to check what the behavior is for the following
similar operations:

2. Generating a cert from a CSR that uses a non-preferred string
type in its DN, here it should be explicit if the DN will be converted
to an enabled string type (e.g. from PrintableString to UTF8String or
from UTF8String to BMPString or from
UTF8StringWithOnlyBasicCharsPresent to PrintableString).

3. Validating a certificate whose issuing CA certificate specifies path
constraints where the issued certificate satisfies the path constraint
except for the exact choice of string type.

Technical note:  All the defined string types have a well defined
mapping to and from 32 bit Unicode code points, with the following
one-way limitations:

    BMPStrings can only represent U+0000 to U+10FFFF
       (using UTF-16)

    UTF8Strings can only represent U+0000 to U+7FFFFFFF
       (allowing the possibility that some codepoints above U+10FFFF
        may be assigned in the future, contrary to current policy).
       (OpenSSL may or may not accept the CESU-8 and Java
        Modified UTF-8 encoding variants and may or may not normalize
        those to real UTF-8 before further processing).

    PrintableString can only represent a specific small set of Unicode
       code points

    T61String can only represent a different specific small set.


Erwann Abalea

Dec 16, 2011, 11:23:52 AM

man req
Then look for the "-utf8" argument.

I took your example below, added "-utf8" argument, and it worked.
You can display the content with "openssl req -text -noout -in blabla.pem -nameopt multiline,utf8,-esc_msb"


On 16/12/2011 16:33, Lou Picciano wrote:

openssl req -new -sha1 -nodes \
-nameopt multiline,show_type \
-keyout private/THORSTROM.key \
-out csrs/THORSTROM.csr \
-subj "/O=ESBJÖRN.com/OU=Esbjörn-Thörstrom Group/CN=Áki Thörstrom"

-- 
Erwann ABALEA

Mick

Dec 16, 2011, 11:57:49 AM

On Friday 16 Dec 2011 16:23:52 you wrote:
> man req
> Then look for the "-utf8" argument.
>
> I took your example below, added "-utf8" argument, and it worked.
> You can display the content with "openssl req -text -noout -in
> blabla.pem -nameopt multiline,utf8,-esc_msb"

Would using -utf8 resolve the original OP problem?

--
Regards,
Mick

Erwann Abalea

Dec 16, 2011, 12:14:11 PM

To create the request/certificate, yes.
This is what I do to embed accented characters in UTF8.

Typing

openssl req -utf8 -new -nodes -newkey rsa:512 -keyout THORSTROM.key -out THORSTROM.csr -subj "/O=ESBJÖRN.com/OU=Esbjörn-Thörstrom Group/CN=Áki Thörstrom"

on a UTF8-capable terminal, with "string_mask = utf8only" in the right openssl.cnf file, gives me a certificate request correctly encoded in UTF8 with the wanted characters in the DN.
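The encoding can then be confirmed on the resulting request with, e.g.,

openssl req -in THORSTROM.csr -noout -subject -nameopt multiline,show_type

which should report UTF8STRING for the accented fields.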

-- 
Erwann ABALEA

Jakob Bohm

Dec 16, 2011, 12:27:42 PM

Sorry, but OP's problem seems to be that the CSR was created by "some
software embedded in a router", which presumably would not allow him
to set OpenSSL command line options, OpenSSL config file options or
even the terminal type, even if the software in the router happened to
be built around OpenSSL.

The OP's problem is that the OpenSSL ca command is being overly strict in
its handling of policy constraints on DN name components, rejecting
alternative encodings of the same name with a meaningless error
message ("foo" does not match "foo") rather than accept those.

Mick

Dec 16, 2011, 12:45:44 PM

Indeed, the message was rather esoteric and it did not offer a way out - e.g.
it could have advised changing "match" to "supplied" in openssl.cnf, or
ensuring that the encoding between the CSR and the CA is the same.

I think what confused me is that by uploading the cacert to the router I would
expect the router to respect the cacert's encodings. It evidently did not.

Since I cannot change the router firmware, what should I change
'string_mask = ' to on the PC so that it agrees with the router?
--
Regards,
Mick

Erwann Abalea

Dec 16, 2011, 12:47:07 PM

On 16/12/2011 16:29, Jakob Bohm wrote:
> On 12/16/2011 3:22 PM, Erwann Abalea wrote:
>> On 16/12/2011 15:07, Jakob Bohm wrote:
>>> I think we may have a bug here, anyone from the core team
>>> wish to comment on this.
>>>
>>> The apparent bug:
>>>
>>> When enforcing the "match" policy for a DN part, openssl reports an
>>> error if the CSR has used a different string type for the field, but
>>> the
>>> correct value (The naively expected behavior is to realize the strings
>>> are identical and use the configured encoding for the resulting cert).
>>
>> Do you expect the "openssl ca" tool to apply the complete X.520
>> comparison rules before checking the policy?
> Not unless there are OpenSSL functions to do the work.
>
> Otherwise I just expect it to apply the character set conversions it
> uses for its
> other operations (such as reading the config file/displaying DNs).

Fair.
I personally use the openssl command line tools to have a quick CA, not
a full-featured one. The API is complete and allows one to code this.
But you're right, it would be fair to have consistent behaviour.

>>> 3. Validating a certificate whose issuing CA certificate specifies path
>>> constraints where the issued certificate satisfies the path constraint
>>> except for the exact choice of string type.
>>
>> NameConstraints is a set of constraints imposed on the semantic value
>> of the name elements, not on their encoding (string type,
>> double-spacing, case differences, etc).
> The question was how the OpenSSL code (library and command line) handle
> the scenario, your answer seems to indicate that it is indeed supposed to
> compare the semantic character sequence, not the encoding.

That's what X.509 and X.520 impose. An algorithm is described in X.520
for name comparisons.

>> T.61 has no "well defined" bidirectional mapping with UTF8.
>> That said, T.61 was withdrawn before 1993 (IIRC) and shouldn't be used.
>>
> According to RFC1345, T.61 has a well defined mapping to named
> characters also found in UNICODE. Some of those are encoded
> as two bytes in T.61 (using a modifier+base char scheme), the
> rest as one byte. That is what I mean by a bidirectional mapping
> to a small (sub)set of UNICODE, meaning that most UNICODE
> code points cannot be mapped to T.61, but the rest have a
> bidirectional mapping.

I'm not finished reading T.61 (1988 edition), but here's
what I found so far:
- 0xA6 is the '#' character, 0xA8 is the '¤' character (generic
currency), but those characters can also be obtained with 0x23 and 0x24,
respectively (Figure 2, note 4). Later in the same document, 0x23 and
0x24 are declared as "not used". This is both ambiguous and not
bidirectional.
- 0x7F and 0xFF are not defined, and are not defined as "not used".
- 0xC9 was the umlaut diacritical mark in the 1980 edition, which is
still tolerated in the 1988 edition, but the tables don't clearly define
0xC9 (and again, don't define it as "not used"). 0xC8 is declared as
"diaresis or umlaut mark". As I don't have the 1980 edition, I don't
know if it was already the case.
- nothing is said about what happens if a diacritical mark is encoded
without a base character.

These are ambiguities.

Annexes define control sequences (longer than 2 bytes), graphical
characters, configurable character sets, presentation functions
(selection of page format, character sizes and attributes
(bold/italic/underline), line settings (vertical and horizontal
spacing)). I doubt everything can be mapped to UTF8.

> Constructing a mapping table from the data in RFC1345 or other
> sources is left as an exercise for the reader (cheat hint: Maybe
> IBM included such a table in ICU or unicode.org included one in
> its data files).

I think only a subset of T.61 is taken into consideration. But I haven't
looked at the hinted files.

--
Erwann ABALEA

Lou Picciano

Dec 16, 2011, 12:56:44 PM

OK, Jakob - will try this. Thanks for the feedback. (Seems we'd tried the 'utf8' option inline already, but will try again.) And my 'read' of the -nameopt multiline option was that utf8 would be included, in the absence of its specific de-activation, such as with the -utf8 command.


Lou Picciano


From: "Jakob Bohm" <jb-op...@wisemo.com>
To: openss...@openssl.org
Sent: Friday, December 16, 2011 12:27:42 PM
Subject: Re: [openssl-users] Re: stateOrProvinceName field problem when signing CSR

Erwann Abalea

Dec 16, 2011, 1:04:49 PM

On 16/12/2011 18:27, Jakob Bohm wrote:
> On 12/16/2011 6:14 PM, Erwann Abalea wrote:
>> On 16/12/2011 17:57, Mick wrote:
>>> On Friday 16 Dec 2011 16:23:52 you wrote:
>>>> man req
>>>> Then look for the "-utf8" argument.
>>>>
>>>> I took your example below, added "-utf8" argument, and it worked.
>>>> You can display the content with "openssl req -text -noout -in
>>>> blabla.pem -nameopt multiline,utf8,-esc_msb"
>>> Would using -utf8 resolve the original OP problem?
>>
>> To create the request/certificate, yes.
>> This is what I do to embed accented characters in UTF8.
>>
>> Typing
>>
>> openssl req -utf8 -new -nodes -newkey rsa:512 -keyout THORSTROM.key
>> -out THORSTROM.csr -subj "/O=ESBJÖRN.com/OU=Esbjörn-Thörstrom
>> Group/CN=Áki Thörstrom"
>>
>> on an UTF8 capable terminal, with a "string_mask = utf8only" in the
>> right openssl.cnf file, gives me a certificate request correctly
>> encoded in UTF8 with the wanted characters in the DN.
> Sorry, but OP's problem seems to be that the CSR was created by "some
> software embedded in a router",

Sorry, I replied to the problem described by Lou Picciano, and forgot
that Mick was the OP. My fault.

--
Erwann ABALEA

Jakob Bohm

Dec 16, 2011, 1:07:55 PM

I understand, but does OpenSSL implement that?
>
>>> T.61 has no "well defined" bidirectional mapping with UTF8.
>>> That said, T.61 was withdrawn before 1993 (IIRC) and shouldn't be used.
>>>
>> According to RFC1345, T.61 has a well defined mapping to named
>> characters also found in UNICODE. Some of those are encoded
>> as two bytes in T.61 (using a modifier+base char scheme), the
>> rest as one byte. That is what I mean by a bidirectional mapping
>> to a small (sub)set of UNICODE, meaning that most UNICODE
>> code points cannot be mapped to T.61, but the rest have a
>> bidirectional mapping.
>
> I'm not finished with the reading of T.61 (1988 edition), but here's
> what I found:
> - 0xA6 is the '#' character, 0xA8 is the '¤' character (generic
> currency), but those characters can also be obtained with 0x23 and
> 0x24, respectively (Figure 2, note 4). Later in the same document,
> 0x23 and 0x24 are declared as "not used". This is both ambiguous and
> not bidirectional.
As you quote it (I don't have a copy), this sounds like using 0x23
and 0x24 should not be done when encoding, but should be accepted
when decoding.
> - 0x7F and 0xFF are not defined, and are not defined as "not used".
RFC1345 seems to indicate that 0x7F maps to U+007F DEL
> - 0xC9 was the umlaut diacritical mark in the 1980 edition, which is
> still tolerated in the 1988 edition, but the tables don't clearly
> define 0xC9 (and again, don't define it as "not used"). 0xC8 is
> declared as "diaresis or umlaut mark". As I don't have the 1980
> edition, I don't know if it was already the case.
> - nothing is said if a diacritical mark is encoded without a base
> character.
RFC1345 seems to indicate that certain diacritical marks must always be
followed by a base character (which may be 0x20 space), the others never.
This is consistent with the behavior of mechanical teletypes
and typewriters: Diacritics are implemented as overtyping "dead keys"
that place the diacritic on the paper but do not advance the print head,
thus causing the next character to be combined with it.
>
> These are ambiguities.
>
> Annexes define control sequences (longer that 2 bytes), graphical
> characters, configurable character sets, presentation functions
> (selection of page format, character sizes and attributes
> (bold/italic/underline), line settings (vertical and horizontal
> spacing)). I doubt everything can be mapped to UTF8.
Most of those would be inapplicable to the encoding of X.500 strings,
configurable
character sets sounds like an ISO-2022 like mechanism useful for
encoding an even
larger subset of UNICODE, as do graphical characters.

However, none of those features are mentioned in the still-available
secondary references I looked at (RFC1345 and Wikipedia), so they are
unlikely to be accepted or emitted by any current implementation of T.61.
>
>> Constructing a mapping table from the data in RFC1345 or other
>> sources is left as an exercise for the reader (cheat hint: Maybe
>> IBM included such a table in ICU or unicode.org included one in
>> its data files).
>
> I think only a subset of T.61 is taken into consideration. But I
> haven't looked at the hinted files.
>
Sounds like it. RFC1345 is a historic listing of character sets encountered
on the European part of the early Internet, with machine readable tables of
each such encoding in terms of two-character abbreviations from a historic
ISO standard (fortunately re-documented within the RFC itself).

RFC1345 obviously predates both the IANA charset registry and the other
current catalogs of character set encodings.

Erwann Abalea

Dec 16, 2011, 1:31:01 PM

On 16/12/2011 18:45, Mick wrote:
[...]
> Indeed, the message was rather esoteric and it did not offer a way out - e.g.
> it could have advised to change "match" to "supplied" in openssl.cnf, or to
> ensure that the encoding between the CSR and ca is the same.
>
> I think what confused me is that by uploading the cacert to the router I would
> expect the router to respect the cacert's encodings. It evidently did not.

It doesn't need to :)

> Since I cannot change the router firmware, what should I change the
> 'string_mask = ' on the PC to agree with the router?

My understanding is that string_mask is used when producing an object
(request or certificate), not when checking its content with the policy
match directives.

You could either regenerate your CA with string_mask set to "default"
(which means: first try "PrintableString", then "T61String", then
"BMPString"). Your router uses PrintableString for pretty much anything
except commonName, which is encoded in T61String. That could work.
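(In the openssl.cnf used to create the CA certificate, that is the single
line

string_mask = default

in the [ req ] section.)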

Or you could keep your CA certificate as is, change the policy
directives (from "match" to "supplied"), and manually check the requests.

Or code something with "openssl req -text -nameopt
multiline,utf8,-esc_msb ...", extracting the RDNs, comparing with what
is set in the CA certificate (the "-nameopt ..." argument will convert
everything into UTF8, easing the comparison), thus performing your own
validation.
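For instance (a sketch reusing the file names from this thread), the two
subjects can be dumped in a directly comparable form with

openssl x509 -in cacert_VPN.pem -noout -subject -nameopt multiline,utf8,-esc_msb
openssl req -in certificate-router-request -noout -subject -nameopt multiline,utf8,-esc_msb

and compared field by field (with diff, say) before deciding to sign.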

--
Erwann ABALEA

Erwann Abalea

Dec 16, 2011, 1:55:45 PM

On 16/12/2011 19:07, Jakob Bohm wrote:
> On 12/16/2011 6:47 PM, Erwann Abalea wrote:
>> On 16/12/2011 16:29, Jakob Bohm wrote:
>>> On 12/16/2011 3:22 PM, Erwann Abalea wrote:
>>>> NameConstraints is a set of constraints imposed on the semantic
>>>> value of the name elements, not on their encoding (string type,
>>>> double-spacing, case differences, etc).
>>> The question was how the OpenSSL code (library and command line) handle
>>> the scenario, your answer seems to indicate that it is indeed
>>> supposed to
>>> compare the semantic character sequence, not the encoding.
>>
>> That's what X.509 and X.520 impose. An algorithm is described in
>> X.520 for name comparisons.
> I understand, but does OpenSSL implement that?

In the API, yes. At least in the 1.0.0 branch, which passes the NIST PKITS
suite.

>> I'm not finished with the reading of T.61 (1988 edition), but here's
>> what I found:
>> - 0xA6 is the '#' character, 0xA8 is the '¤' character (generic
>> currency), but those characters can also be obtained with 0x23 and
>> 0x24, respectively (Figure 2, note 4). Later in the same document,
>> 0x23 and 0x24 are declared as "not used". This is both ambiguous and
>> not bidirectional.
> As you quote it (I don't have a copy), this sounds like using 0x23
> and 0x24 should not be done when encoding, but should be accepted
> when decoding.

Yes, and that also means those characters cannot be obtained with "7-bit
T.61", contrary to the table found in RFC1345.
In fact, there's no "7-bit T.61" set, so I don't really know how RFC1345
should be treated.

>> - 0x7F and 0xFF are not defined, and are not defined as "not used".
> RFC1345 seems to indicate that 0x7F maps to U+007F DEL

This mapping (0x7F - DEL) is only presented in Annex E, discussing the
Greek primary character set. But Table 2, which exhaustively lists
the codes, avoids 0x7F (07/15, really).

Some PKI toolkits use T.61 to encode ISO8859-1 characters, and ISO8859-1
defines 0x7F as "DEL".

>> Annexes define control sequences (longer that 2 bytes), graphical
>> characters, configurable character sets, presentation functions
>> (selection of page format, character sizes and attributes
>> (bold/italic/underline), line settings (vertical and horizontal
>> spacing)). I doubt everything can be mapped to UTF8.
> Most of those would be inapplicable to the encoding of X.500 strings,
> configurable
> character sets sounds like an ISO-2022 like mechanism useful for
> encoding an even
> larger subset of UNICODE, as do graphical characters.
>
> However none of those features were mentioned in the still available
> secondary
> references I looked at (RFC1345 and Wikipedia), so they are unlikely
> to be
> accepted nor emitted by any current implementations of T.61.

Sure. But those are valid T.61 sequences anyway.

As you said, RFC1345 lists historic character sets, and T.61 is one of
them (it predates Unicode).
T.61 was ambiguous, was defined for a now obsolete system (Teletex), was
more than a simple character set (you could redefine graphical
characters, and specify formatting), and has now been withdrawn for
nearly two decades. It's time to avoid it :)

--
Erwann ABALEA

Lou Picciano

Dec 16, 2011, 4:05:24 PM

Yes, and Thank You both for doing so!

While we're at it, I am reminded of another one we've found - not terribly important, but worth a look:

When using the option '-enddate 140615235959Z' while signing a CSR, the cert is created correctly, expiring in 2014. However, the user prompt indicates it expires in '365 days' - in fact, I've never seen it prompt with any number larger than 365 days!

Not a huge problem, but...
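(The actual expiry date can always be checked on the resulting cert with
something like

openssl x509 -in newcert.pem -noout -enddate

where newcert.pem stands for whatever file the signed cert went to.)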

Lou Picciano


From: "Erwann Abalea" <erwann...@keynectis.com>
To: openss...@openssl.org
Cc: "Jakob Bohm" <jb-op...@wisemo.com>
Sent: Friday, December 16, 2011 1:04:49 PM

Subject: Re: [openssl-users] Re: stateOrProvinceName field problem when signing CSR

Le 16/12/2011 18:27, Jakob Bohm a écrit :

Mick

Dec 18, 2011, 1:10:55 PM

On Friday 16 Dec 2011 18:31:01 you wrote:
> On 16/12/2011 18:45, Mick wrote:
> [...]

> > Since I cannot change the router firmware, what should I change the
> > 'string_mask = ' on the PC to agree with the router?
>
> My understanding is that string_mask is used when producing an object
> (request or certificate), not when checking its content with the policy
> match directives.

That's fine as far as openssl usage is concerned, but when the standalone
router compares the client certificate submitted to it, it fails with a
malformed type error (16). So, I'm led to believe that I should try creating
a CA that uses a default string_mask to align that with the way the router
parses the RDNs and sign both router and client certificates with it.


> You could either regenerate your CA with string_mask set to "default"
> (which means: first try "PrintableString", then "T61String", then
> "BMPString"). Your router uses PrintableString for pretty much anything
> except commonName, which is encoded in T61String. That could work.

Perhaps I am being dense ... but I can't find which section I should be
specifying this option under, in the openssl.cnf file. I tried placing it
under [ req ] as well as other sections, and the produced cacert Subject fields
always get encoded in UTF8 (except for Country, which stays as
PrintableString) :(

--
Regards,
Mick

Mick

Dec 19, 2011, 1:45:13 AM

Oops! Scratch that! I had forgotten to point it to the correct openssl.cnf
file! O_O

OK, I'm almost there ... the only difference now between the router and my PKI
is the commonName. The router has T61String while my cacert comes out as
PrintableString. How can I change a single RDN?
--
Regards,
Mick

Mick

Dec 19, 2011, 2:09:49 AM

Aha! Just tried signing the CSR and the commonName is actually created as a
T61String!

Thank you very much for your help and sorry for the noise. :-)
--
Regards,
Mick

Hasan, Rezaul (NSN - US/Arlington Heights)

Dec 20, 2011, 6:18:50 PM

Hello All,

We have openssl 0.9.8r on our Linux Server.

A Nessus security scan on our Linux server tells us that we may be
vulnerable to a potential DOS due to SSL/TLS Renegotiation
Vulnerability [CVE-2011-1473].

The suggestions of mitigating these (we believe) are:

1. Disable Re-Negotiation completely. {We CANNOT use this choice,
because our system does need to allow Re-Negotiation in some cases. So
NOT an option for us}

2. "Rate-Limit" Re-Negotiations.

Can someone please provide detailed information/guidance about exactly
how to go about "Rate-Limiting" Re-Negotiation requests on the Linux
Server? Pointing to a detailed article would also be helpful.

Thanks a bunch in advance.

Jakob Bohm

Dec 21, 2011, 1:23:35 PM

On 12/21/2011 12:18 AM, Hasan, Rezaul (NSN - US/Arlington Heights) wrote:
> Hello All,
You will have a much better chance of getting an answer
if you don't use the "Reply" button to start a new discussion.
Most readers of this list/forum use software which groups
together replies under the message you replied to, so the
following questions of yours ended up deep inside a
discussion on subtle issues in the handling of names
in certificates.
> We have openssl 0.9.8r on our Linux Server.
Are you sure it is really a plain version "0.9.8r"? It could also
be "0.9.8r" + later security fixes backported to work with
version "0.9.8r" by your Linux vendor.
> A Nessus security scan on our Linux server tells us that we may be
> vulnerable to a potential DOS due to SSL/TLS Renegotiation
> Vulnerability [CVE-2011-1473].
Renegotiation vulnerabilities are notorious for being
impossible to detect reliably from the outside; this
may or may not be a false alarm.
> The suggestions of mitigating these (we believe) are:
>
> 1. Disable Re-Negotiation completely. {We CANNOT use this choice,
> because our system does need to allow Re-Negotiation in some cases. So
> NOT an option for us}
What server software is this? Is it Apache httpd?
Some other software?
> 2. "Rate-Limit" Re-Negotiations.
>
> Can someone please provide detailed information/guidance about exactly
> how to go about "Rate-Limiting" Re-Negotiation requests on the Linux
> Server? Pointing to a detailed article would also be helpful.
In general, rate limiting is done by having a function in your
software called whenever something happens (such as
renegotiation), and inside that function keeping a count of how
many times it was called in each of the past X minutes. If
the total is over the relevant limit for this vulnerability, then
either delay the operation by enough time to stay under the
limit, or artificially fail the operation in such a way that the
remote client (which may be an enemy) cannot tell anything
about what happened in that renegotiation, such as whether it
would have succeeded had there been no rate limit.

Unfortunately, I do not know the specifics of this vulnerability or
how low the "safe" rate would be.


3. There is a 3rd option: Actually install or create a proper fix
for the vulnerability, thus eliminating the need for workarounds.

Hopefully, others on this list know more about this issue and
can tell you what is needed to be safe.

Mick

Dec 26, 2011, 2:00:32 PM

On Friday 16 Dec 2011 18:31:01 you wrote:
> On 16/12/2011 18:45, Mick wrote:

> > Since I cannot change the router firmware, what should I change the
> > 'string_mask = ' on the PC to agree with the router?
>
> My understanding is that string_mask is used when producing an object
> (request or certificate), not when checking its content with the policy
> match directives.
>
> You could either regenerate your CA with string_mask set to "default"
> (which means: first try "PrintableString", then "T61String", then
> "BMPString"). Your router uses PrintableString for pretty much anything
> except commonName, which is encoded in T61String. That could work.

I seem to have overcome the original problem. Now both the cacert and signed
client certificates are formatted in the same way. I used -policy
policy_anything to avoid complaints from openssl ca.

Unfortunately the problem of authenticating on the VPN gateway remains. :-(

I would be grateful for some advice, as I am not sure if I am following the
correct steps. I have created a request for a client certificate:

==========================================
openssl req -config ./openssl_VPN.cnf -new -newkey rsa:2048 \
    -keyout VPN_test_key.pem -days 1095 -out VPN_test_cert.req
==========================================


Then signed it with the cacert:

==========================================
openssl ca -config ./openssl_VPN.cnf -extensions usr_cert -days 1095 \
    -cert cacert_VPN.pem -keyfile VPN_CA/private/cakey_VPN.pem \
    -policy policy_anything -infiles VPN_test_cert.req
Using configuration from ./openssl_VPN.cnf
Enter pass phrase for VPN_CA/private/cakey_VPN.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
Serial Number: 3 (0x3)
Validity
Not Before: Dec 26 18:13:18 2011 GMT
Not After : Dec 25 18:13:18 2014 GMT
Subject:
countryName = GB
organizationName = Sundial
commonName = VPN_test_XPS
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Subject Key Identifier:
E6:95:82:48:3D:E8:3D:9E:0C:BA:CF:3A:EC:FF:8D:0D:E0:6A:1B:2B
X509v3 Authority Key Identifier:
keyid:CA:91:0A:ED:F9:B5:F4:F7:60:C5:A7:1C:0B:75:94:5C:33:38:F1:AB

X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
Certificate is to be certified until Dec 25 18:13:18 2014 GMT (1095 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 3 (0x3)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=GB, O=Sundial, CN=Sundial_VPN_CA
Validity
Not Before: Dec 26 18:13:18 2011 GMT
Not After : Dec 25 18:13:18 2014 GMT
Subject: C=GB, O=Sundial, CN=VPN_test_XPS
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:df:d4:74:bc:de:21:bd:61:99:4c:88:97:0a:43:
3f:c0:40:01:73:b8:41:ce:47:46:fd:14:0f:83:6d:
75:54:bc:73:45:f2:99:24:1e:51:f1:d9:b6:8f:9b:
bf:e5:e5:93:00:79:a8:56:38:04:e2:06:69:5a:1e:
29:16:72:25:5e:bb:1a:2d:e0:82:90:b2:46:78:b5:
8d:e7:ce:6a:f7:9e:f4:6a:30:4e:da:db:09:17:ba:
78:d0:03:c5:22:ad:1d:73:61:82:81:ce:d1:15:1a:
dd:66:76:22:d0:4f:a6:23:13:f1:b7:d0:67:57:28:
e7:bb:25:87:57:04:c6:c3:4f:f1:56:c1:b4:12:05:
7d:3a:9c:14:88:5e:8c:df:49:08:69:2c:00:8a:db:
d6:20:e5:f6:4d:66:38:a3:c9:f5:9d:f4:b8:24:03:
11:67:75:3c:c7:f1:75:35:dc:86:9f:e9:98:04:c7:
ba:8f:64:b8:58:64:49:27:e4:c1:25:0f:00:4e:ad:
7c:14:3b:38:1b:4d:fc:47:de:d4:a4:48:1c:81:89:
20:f5:8e:ad:2b:e2:91:51:c1:db:b3:8f:86:17:fc:
61:49:4e:03:b1:8d:97:2d:06:b4:10:51:20:78:9e:
c2:3d:5f:a8:83:a3:8e:6b:39:64:2a:ac:7a:f7:4e:
31:11
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Subject Key Identifier:
E6:95:82:48:3D:E8:3D:9E:0C:BA:CF:3A:EC:FF:8D:0D:E0:6A:1B:2B
X509v3 Authority Key Identifier:
keyid:CA:91:0A:ED:F9:B5:F4:F7:60:C5:A7:1C:0B:75:94:5C:33:38:F1:AB

X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
Signature Algorithm: sha1WithRSAEncryption
55:3b:d7:52:91:70:a2:ec:8f:ff:db:ca:1b:8c:b5:73:34:10:
e8:18:3f:4f:5a:f5:75:88:99:86:a6:e8:3d:1b:2d:8c:d2:ae:
ba:e0:94:f8:a5:65:c8:4e:0e:73:e7:56:58:27:86:6e:ce:60:
df:b1:f6:6b:f5:f6:bf:29:71:af:29:f7:c7:cd:fe:26:86:04:
46:bc:89:9c:59:aa:92:d2:e4:94:f6:e0:13:ca:1c:98:33:24:
c9:aa:5e:00:c3:f9:d2:3f:7d:8f:b7:69:07:c4:f2:ea:d6:8f:
5e:10:0f:0b:27:26:b4:a5:65:30:d6:f5:c5:50:0e:b5:69:7a:
2e:ff:74:3f:35:51:c6:ac:e1:cb:31:b9:e5:80:69:53:4c:c4:
80:98:98:b4:ac:2b:cf:f6:39:96:86:3c:d4:48:da:9f:c5:62:
e0:95:d2:38:75:8f:5d:e8:55:bb:ea:6f:3c:2a:79:da:a5:89:
dd:2d:0d:a0:70:08:e1:27:19:3c:bc:e1:79:78:91:48:2c:dd:
7a:77:df:2b:ff:6f:19:ee:ab:97:12:e8:e9:81:2d:0a:04:69:
da:ab:32:51:e4:62:09:cb:80:e1:71:b5:63:c5:35:2d:ba:a9:
1f:a5:b2:43:ca:cd:1c:e2:c2:89:70:72:62:ff:8e:d1:c6:93:
d8:6a:49:4f
[snip ...]
==========================================


However, trying to verify it brings up some errors:
==========================================
openssl verify -verbose -CAfile cacert_VPN.pem -x509_strict -policy_print \
    -issuer_checks VPN_test_cert.pem
VPN_test_cert.pem: C = GB, O = Sundial, CN = VPN_test_XPS
error 29 at 0 depth lookup:subject issuer mismatch
C = GB, O = Sundial, CN = VPN_test_XPS
error 29 at 0 depth lookup:subject issuer mismatch
C = GB, O = Sundial, CN = VPN_test_XPS
error 29 at 0 depth lookup:subject issuer mismatch
OK
==========================================


and the asn1parser fails too:
==========================================
openssl asn1parse -in VPN_test_cert.pem
Error in encoding
139747192850088:error:0D07207B:asn1 encoding routines:ASN1_get_object:header
too long:asn1_lib.c:150:
==========================================

The cacert does not suffer from such verification or parsing errors, but
certificates signed by it do.

The errors that the router authentication shows are:
===================================
CRYPTO_IKE.NEGOTIATION IkeInCertProcess: No certreq/cert data : 5
CRYPTO_IKE.NEGOTIATION IkeProcessPayloads :: IkeInCertReqProcess failed
CRYPTO_IKE.NEGOTIATION 100: IkeProcessPayloads failed
CRYPTO_IKE.NEGOTIATION IkeProcessData: IkeIdleProcess failed
CRYPTO_IKE.NEGOTIATION SENDING NOTIFY MSG:
CRYPTO_IKE.NEGOTIATION PAYLOAD_MALFORMED
CRYPTO_IKE.NEGOTIATION <POLICY: 100> PAYLOADS: NOTIFY
CRYPTO_IKE.NEGOTIATION NOTIFY PAYLOAD
CRYPTO_IKE.NEGOTIATION DOI: 0
CRYPTO_IKE.NEGOTIATION Protocol Id: 1
CRYPTO_IKE.NEGOTIATION Size of SPI: 16
CRYPTO_IKE.NEGOTIATION Type of notify message: 16
CRYPTO_IKE.NEGOTIATION Notify Type: Payload Malformed (16)
CRYPTO_IKE.NEGOTIATION Length of Notification Data: 0
CRYPTO_IKE.NEGOTIATION 100: Sent informational exchange message
===================================


I suspect that the malformed payload complaint is an encoding problem - but
I'm not sure. I get verification and parsing problems with the signed router
certificate, although the router accepted it and loaded it without complaints:
===================================
openssl asn1parse -in router_VPN.pem
Error: offset too large
===================================

Is there a way to overcome the above errors?
--
Regards,
Mick

Dave Thompson

Dec 28, 2011, 11:46:26 PM

> From: owner-ope...@openssl.org On Behalf Of Mick
> Sent: Monday, 26 December, 2011 14:01

<snip: CA-vs-EE DN string types>

> I seem to have overcome the original problem. Now both the
> cacert and signed
> client certificates are formatted in the same way. I used -policy
> policy_anything to avoid complaints from openssl ca.
>
> Unfortunately the problem of authenticating on the VPN
> gateway remains. :-(
>
> I would be grateful for some advice, as I am not sure if I am
> following the
> correct steps. I have created a request for a client certificate:
>
> ==========================================
> openssl req -config ./openssl_VPN.cnf -new -newkey rsa:2048 -keyout
> VPN_test_key.pem -days 1095 -out VPN_test_cert.req
> ==========================================
>
Aside: for req -new without -x509, -days is ignored and useless.
>
> Then signed it with the cacert:
>
Nits: it isn't actually the request that's signed, and the issued cert
isn't actually signed with the CA cert (but with the CA key) - still, we
know what you mean.

> ==========================================
> openssl ca -config ./openssl_VPN.cnf -extensions usr_cert
> -days 1095 -cert
> cacert_VPN.pem -keyfile VPN_CA/private/cakey_VPN.pem -policy
> policy_anything -
> infiles VPN_test_cert.req
<snip>

> However, trying to verify it brings up some errors:
> ==========================================
> openssl verify -verbose -CAfile cacert_VPN.pem -x509_strict
> -policy_print -
> issuer_checks VPN_test_cert.pem
> VPN_test_cert.pem: C = GB, O = Sundial, CN = VPN_test_XPS
> error 29 at 0 depth lookup:subject issuer mismatch
> C = GB, O = Sundial, CN = VPN_test_XPS
> error 29 at 0 depth lookup:subject issuer mismatch
> C = GB, O = Sundial, CN = VPN_test_XPS
> error 29 at 0 depth lookup:subject issuer mismatch
> OK
> ==========================================
>
-issuer_checks can be misleading; these "errors"
are the results of internal tests for a root cert
(i.e. issued by itself) and thus quite normal.
Since the final result is OK, OpenSSL is happy.
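For comparison, the same verification run without -issuer_checks should
produce just the final verdict:

openssl verify -CAfile cacert_VPN.pem VPN_test_cert.pem
VPN_test_cert.pem: OK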

>
> and the asn1parser fails too:
> ==========================================
> openssl asn1parse -in VPN_test_cert.pem
> Error in encoding
> 139747192850088:error:0D07207B:asn1 encoding
> routines:ASN1_get_object:header
> too long:asn1_lib.c:150:
> ==========================================
>
Make sure you asn1parse a file/input containing ONLY
valid data (here dashed-BEGIN, b64 cert, dashed-END).
All(?) other openssl PEM functions accept and ignore
comments or "garbage" before BEGIN or after END, but
not asn1parse. And some openssl functions including ca
PUT such comments. You can avoid editing a copy by:
awk '/-BEGIN/,/-END/' filewithextra | openssl asn1parse
on any *nix, and on Windows if you add an awk port.

> The cacert does not suffer from such verification or parsing
> errors, but
> certificates signed by it, do.
>
> The errors that the router authentication shows are:
<snip>

But as far as pleasing your router, I have no clue, sorry.

Mick

Dec 29, 2011, 5:03:01 AM

On Thursday 29 Dec 2011 04:46:26 you wrote:
> > From: owner-ope...@openssl.org On Behalf Of Mick
> > Sent: Monday, 26 December, 2011 14:01
>
> <snip: CA-vs-EE DN string types>
>
> > I seem to have overcome the original problem. Now both the
> > cacert and signed
> > client certificates are formatted in the same way. I used -policy
> > policy_anything to avoid complaints from openssl ca.
> >
> > Unfortunately the problem of authenticating on the VPN
> > gateway remains. :-(
> >
> > I would be grateful for some advice, as I am not sure if I am
> > following the
> > correct steps. I have created a request for a client certificate:
> >
> > ==========================================
> >
> > openssl req -config ./openssl_VPN.cnf -new -newkey rsa:2048 -keyout
> >
> > VPN_test_key.pem -days 1095 -out VPN_test_cert.req
> > ==========================================
>
> Aside: for req -new without -x509, -days is ignored and useless.

Ah! Thanks, I didn't know this. I thought that the CLI options prevail in
any case.

> > ==========================================
> > openssl ca -config ./openssl_VPN.cnf -extensions usr_cert
> > -days 1095 -cert
> > cacert_VPN.pem -keyfile VPN_CA/private/cakey_VPN.pem -policy
> > policy_anything -
> > infiles VPN_test_cert.req
>
> <snip>
>
> > However, trying to verify it brings up some errors:
> > ==========================================
> > openssl verify -verbose -CAfile cacert_VPN.pem -x509_strict
> > -policy_print -
> > issuer_checks VPN_test_cert.pem
> > VPN_test_cert.pem: C = GB, O = Sundial, CN = VPN_test_XPS
> > error 29 at 0 depth lookup:subject issuer mismatch
> > C = GB, O = Sundial, CN = VPN_test_XPS
> > error 29 at 0 depth lookup:subject issuer mismatch
> > C = GB, O = Sundial, CN = VPN_test_XPS
> > error 29 at 0 depth lookup:subject issuer mismatch
> > OK
> > ==========================================
>
> -issuer_checks can be misleading; these "errors"
> are the results of internal tests for a root cert
> (i.e. issued by itself) and thus quite normal.
> Since the final result is OK, OpenSSL is happy.

OK, it's good to know that openssl is happy.


> > and the asn1parser fails too:
> > ==========================================
> > openssl asn1parse -in VPN_test_cert.pem
> > Error in encoding
> > 139747192850088:error:0D07207B:asn1 encoding
> > routines:ASN1_get_object:header
> > too long:asn1_lib.c:150:
> > ==========================================
>
> Make sure you asn1parse a file/input containing ONLY
> valid data (here dashed-BEGIN, b64 cert, dashed-END).
> All(?) other openssl PEM functions accept and ignore
> comments or "garbage" before BEGIN or after END, but
> not asn1parse. And some openssl functions including ca
> PUT such comments. You can avoid editing a copy by:
> awk '/-BEGIN/,/-END/' filewithextra | openssl asn1parse
> on any *nix, and on Windows if you add an awk port.

Just tried this and all certs are parsed correctly. So clearly the router is
not choking on an encoding error.


> > The cacert does not suffer from such verification or parsing
> > errors, but certificates signed by it, do.
>
> > The errors that the router authentication shows are:
> <snip>
>
> But as far as pleasing your router, I have no clue, sorry.

Thank you Dave. Your comments have been most informative. I have raised a
support request with the router manufacturer and wait to see what they come up
with.
--
Regards,
Mick

Mick

Dec 29, 2011, 7:38:23 PM

Just an idea ... Could it be that the router is expecting some explicit
"keyUsage:" extensions on the cacert? If so, what should I try?