
Mozilla RSA-PSS policy


Hubert Kario

Nov 23, 2017, 7:08:10 AM
to dev-secur...@lists.mozilla.org
In response to comment made by Gervase Markham[1], pointing out that Mozilla
doesn't have an official RSA-PSS usage policy.

This is the thread to discuss it and make a proposal that could be later
included in Mozilla Root Store Policy[2]

I'm proposing the following additions to the Policy (leaving out exactly which
sections this needs to be added, as that's better left for the end of
discussion):

- RSA keys can be used to make RSASSA-PKCS#1 v1.5 or RSASSA-PSS signatures on
issued certificates
- certificates containing RSA parameters can be limited to perform RSASSA-PSS
signatures only by specifying the X.509 Subject Public Key Info algorithm
identifier to RSA-PSS algorithm
- end-entity certificates must not include RSA-PSS parameters in the Public
Key Info Algorithm Identifier - that is, they must not be limited to creating
signatures with only one specific hash algorithm
- issuing certificates may include RSA-PSS parameters in the Public Key Info
Algorithm Identifier, it's recommended that the hash selected matches the
security of the key
- signature hash and the hash used for mask generation must be the same both
in public key parameters in certificate and in signature parameters
- the salt length must equal at least 32 for SHA-256, 48 for SHA-384 and 64
bytes for SHA-512
- SHA-1 and SHA-224 are not acceptable for use with RSA-PSS algorithm
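The parameter constraints proposed above can be expressed as a small validity check. A minimal sketch in Python (my own illustration, not proposed policy text; the function and constant names are invented):

```python
import hashlib

# Hashes the proposal permits with RSA-PSS (SHA-1 and SHA-224 are excluded).
ALLOWED_HASHES = {"sha256", "sha384", "sha512"}

def check_pss_params(sig_hash, mgf1_hash, salt_len):
    """Return True iff the parameters satisfy the proposed policy: an
    allowed hash, an MGF1 hash equal to the signature hash, and a salt
    at least as long as one digest output."""
    if sig_hash not in ALLOWED_HASHES:
        return False
    if mgf1_hash != sig_hash:   # signature hash and mask-generation hash must match
        return False
    return salt_len >= hashlib.new(sig_hash).digest_size

assert check_pss_params("sha256", "sha256", 32)
assert not check_pss_params("sha384", "sha256", 48)   # mismatched MGF1 hash
assert not check_pss_params("sha1", "sha1", 20)       # SHA-1 forbidden
```

The same check would apply both to the parameters in a subjectPublicKeyInfo (when present) and to the parameters of a signature.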

1 - https://bugzilla.mozilla.org/show_bug.cgi?id=1400844#c15
2 - https://www.mozilla.org/en-US/about/governance/policies/security-group/
certs/policy/
--
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 115, 612 00 Brno, Czech Republic

Jakob Bohm

Nov 23, 2017, 2:23:09 PM
to mozilla-dev-s...@lists.mozilla.org
On 23/11/2017 13:07, Hubert Kario wrote:
> In response to comment made by Gervase Markham[1], pointing out that Mozilla
> doesn't have an official RSA-PSS usage policy.
>
> This is the thread to discuss it and make a proposal that could be later
> included in Mozilla Root Store Policy[2]
>
> I'm proposing the following additions to the Policy (leaving out exactly which
> sections this needs to be added, as that's better left for the end of
> discussion):
>
> - RSA keys can be used to make RSASSA-PKCS#1 v1.5 or RSASSA-PSS signatures on
> issued certificates

I presume this refers to "CA RSA keys"

> - certificates containing RSA parameters can be limited to perform RSASSA-PSS
> signatures only by specifying the X.509 Subject Public Key Info algorithm
> identifier to RSA-PSS algorithm

I presume that "RSA parameters" is not the same as "RSA public key".

For clarity, please specify what X.509 fields or extensions "RSA
parameters" would be contained in. (Since other past or future
extensions might contain information about other keys that should not be
subject to this policy merely due to its wording).

Also specify any RSA related OIDs numerically to resolve the ambiguity
caused by the historic assignment of competing OIDs for RSA and hash+RSA
combinations.

Also clarify (or refer to a specific existing standard) if and how such
specification would restrict the hash algorithms that can be used with
the subject public key by the certificate holder.

> - end-entity certificates must not include RSA-PSS parameters in the Public
> Key Info Algorithm Identifier - that is, they must not be limited to creating
> signatures with only one specific hash algorithm

This might or might not need to change in the future, e.g. if attacks
against RSASSA-PKCS#1 v1.5 become more significant than quantum
attacks on RSA public keys.

> - issuing certificates may include RSA-PSS parameters in the Public Key Info
> Algorithm Identifier, it's recommended that the hash selected matches the
> security of the key

The relationship between hash strength and RSA strength is very much a
mixture of current attack speeds, estimates and conflicting opinions.
Especially for CA certificates, it would probably be best practice to
use the largest sizes compatible with the systems that will use and/or
trust the certificates (not to be confused with the systems relying on
any one root store).

> - signature hash and the hash used for mask generation must be the same both
> in public key parameters in certificate and in signature parameters

Please make sure that cross-signing between hash algorithms that are
trusted at any given time is allowed. For example if an included root
certificate is RSA-PSS with a declared restriction to use only SHA-256,
it should still be able to cross sign an included or non-included
SHA-384 root.

> - the salt length must equal at least 32 for SHA-256, 48 for SHA-384 and 64
> bytes for SHA-512

Rephrase as "the salt length must equal the output length of the hash
algorithm, e.g. 48 bytes for SHA-384". This would also cover SHA3 and
other "future" algorithms.

> - SHA-1 and SHA-224 are not acceptable for use with RSA-PSS algorithm
>
> 1 - https://bugzilla.mozilla.org/show_bug.cgi?id=1400844#c15
> 2 - https://www.mozilla.org/en-US/about/governance/policies/security-group/
> certs/policy/
>


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Hubert Kario

Nov 24, 2017, 7:04:25 AM
to dev-secur...@lists.mozilla.org, Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Thursday, 23 November 2017 20:22:28 CET Jakob Bohm via dev-security-policy
wrote:
> On 23/11/2017 13:07, Hubert Kario wrote:
> > In response to comment made by Gervase Markham[1], pointing out that
> > Mozilla doesn't have an official RSA-PSS usage policy.
> >
> > This is the thread to discuss it and make a proposal that could be later
> > included in Mozilla Root Store Policy[2]
> >
> > I'm proposing the following additions to the Policy (leaving out exactly
> > which sections this needs to be added, as that's better left for the end
> > of discussion):
> > - RSA keys can be used to make RSASSA-PKCS#1 v1.5 or RSASSA-PSS
> > signatures on issued certificates
>
> I presume this refers to "CA RSA keys"

yes

> > - certificates containing RSA parameters can be limited to perform
> > RSASSA-PSS
> > signatures only by specifying the X.509 Subject Public Key Info algorithm
> > identifier to RSA-PSS algorithm
>
> I presume that "RSA parameters" is not the same as "RSA public key".

yes

> For clarity, please specify what X.509 fields or extensions "RSA
> parameters" would be contained in. (Since other past or future
> extensions might contain information about other keys that should not be
> subject to this policy merely due to its wording).
>
> Also specify any RSA related OIDs numerically to resolve the ambiguity
> caused by the historic assignment of competing OIDs for RSA and hash+RSA
> combinations.

I think that referring to the RFCs that define those OIDs would be sufficient

> Also clarify (or refer to a specific existing standard) if and how such
> specification would restrict the hash algorithms that can be used with
> the subject public key by the certificate holder.

RFC 4055 and RFC 5756

> > - end-entity certificates must not include RSA-PSS parameters in the
> > Public Key Info Algorithm Identifier - that is, they must not be limited
> > to creating signatures with only one specific hash algorithm
>
> This might or might not need to change in the future, e.g. if attacks
> against RSASSA-PKCS#1.5 become more significant than e.g. quantum
> attacks on RSA public keys.

if RSA becomes breakable, the hash used for the signature will have no impact
on the ease of breaking RSA - the key size is important

> > - issuing certificates may include RSA-PSS parameters in the Public Key
> > Info
> > Algorithm Identifier, it's recommended that the hash selected matches the
> > security of the key
>
> The relationship between hash strength and RSA strength is very much a
> mixture of current attack speeds, estimates and conflicting opinions.
> Especially for CA certificates, it would probably be best practice to
> use the largest sizes compatible with the systems that will use and/or
> trust the certificates (not to be confused with the systems relying on
> any one root store).

that's why I didn't specify any particular sizes - but if you have a CA that
has a 4096 bit key and signs with SHA256, and a sub CA that has a 2048 bit
key, that sub CA shouldn't sign certificates with SHA384.

that being said, a hierarchy that uses different key sizes, but all signatures
are signed with SHA256 is also perfectly valid

> > - signature hash and the hash used for mask generation must be the same
> > both
> > in public key parameters in certificate and in signature parameters
>
> Please make sure that cross-signing between hash algorithms that are
> trusted at any given time are allowed. For example if an included root
> certificate is RSA-PSS with a declared restriction to use only SHA-256,
> it should still be able to cross sign an included or non-included
> SHA-384 root.

that was the intention - the tuple (signatureHash, mgf1Hash) is supposed to
have the same value for its first and second element, and that requirement
is supposed to hold for both the public key info and the signature. Thanks
for pointing out that this can be ambiguous.

> > - the salt length must equal at least 32 for SHA-256, 48 for SHA-384 and
> > 64 bytes for SHA-512
>
> Rephrase as "the salt length must equal the output length of the hash
> algorithm, e.g. 48 bytes for SHA-384". This would also cover SHA3 and
> other "future" algorithms.

SHA-3 will require changes to the policy either way...

Ryan Sleevi

Nov 27, 2017, 11:28:45 AM
to Hubert Kario, dev-secur...@lists.mozilla.org
On Thu, Nov 23, 2017 at 7:07 AM, Hubert Kario via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> In response to comment made by Gervase Markham[1], pointing out that
> Mozilla
> doesn't have an official RSA-PSS usage policy.
>
> This is the thread to discuss it and make a proposal that could be later
> included in Mozilla Root Store Policy[2]
>
> I'm proposing the following additions to the Policy (leaving out exactly
> which
> sections this needs to be added, as that's better left for the end of
> discussion):
>
> - RSA keys can be used to make RSASSA-PKCS#1 v1.5 or RSASSA-PSS
> signatures on
> issued certificates
> - certificates containing RSA parameters can be limited to perform
> RSASSA-PSS
> signatures only by specifying the X.509 Subject Public Key Info algorithm
> identifier to RSA-PSS algorithm
> - end-entity certificates must not include RSA-PSS parameters in the
> Public
> Key Info Algorithm Identifier - that is, they must not be limited to
> creating
> signatures with only one specific hash algorithm
> - issuing certificates may include RSA-PSS parameters in the Public Key
> Info
> Algorithm Identifier, it's recommended that the hash selected matches the
> security of the key
> - signature hash and the hash used for mask generation must be the same
> both
> in public key parameters in certificate and in signature parameters
> - the salt length must equal at least 32 for SHA-256, 48 for SHA-384 and
> 64
> bytes for SHA-512
> - SHA-1 and SHA-224 are not acceptable for use with RSA-PSS algorithm
>
> 1 - https://bugzilla.mozilla.org/show_bug.cgi?id=1400844#c15
> 2 - https://www.mozilla.org/en-US/about/governance/policies/
> security-group/
> certs/policy/


Hubert,

Thanks for raising this issue in m.d.s.p.

I think it's helpful to break the discussion into two (or more) parts.

One part worth discussing is the CA policy - that is, what are CAs expected
to do or not do, and what constitutes "misissuance". Another part worth
discussing is client behaviour - what will NSS (and Mozilla) clients
support and not support, despite it not being misissuance, so that there's
a clear understanding about what's supported.

The reason I make this distinction is that the relevant RFCs - 4055 and
5756 - are bad RFCs. While well-intentioned, they were written at the
height of the obsession to 'parameterize all the things' to ensure 'future
compatibility' - but the consequence of this is that they introduced a
tremendous amount of complexity, while also mistaking risk mitigation for
policy advice. Because these RFCs confuse and conflate these two issues,
while also introducing significant area for mistakes, we need to be very
careful and very precise.

On the realm of CA policy, we're discussing two matters:
1) What should the certificates a CA issue be encoded as
2) How should the CA protect and use its private key.

While it may not be immediately obvious, both your proposal and 4055
attempt to treat #2 by #1, but they're actually separate issues. This
mistake is being made by treating PSS-params on CA certificates as an
important signal for reducing cross-protocol attacks, but it doesn't. This
is because the same public/private key pair can be associated with multiple
certificates, with multiple params encodings (and potentially the same
subject), and clients that enforced the silly 4055 restrictions would
happily accept these.

So I think it's useful to instead work from a clean set of principles, and
try to express them:

1) The assumption, although the literature doesn't suggest it's necessary,
and it's not presently enforced in the existing WebPKI, is that the hash
algorithm for both PKCS#1 v1.5 and RSA-PSS should be limited to a single
hash algorithm for the private key.
a) One way to achieve this is via policy - to state that all signatures
produced by a CA with a given private key must use the same set of
parameters
b) Another way is to try and achieve this via encoding (as 4055
attempts), but as I noted, this is entirely toothless (and somewhat
incorrectly presumes X.500's DIT as the mechanism of enforcing policy a)

2) We want to ensure there is a bounded, unambiguous set of accepted
encodings for what a CA directly controls
a) The "signature" fields of TBSCertificate (Certs) and TBSCertList
(CRL). OCSP does not duplicate the signature algorithm in the ResponseData
of a BasicOCSPResponse, so it's not necessary
b) The "subjectPublicKeyInfo" of a TBSCertificate

3) We want to make sure to set expectations around what is supported in the
signatureAlgorithm fields of a Certificate (certs), CertificateList (CRLs),
and BasicOCSPResponse (OCSP).
- Notably, these fields are mutable by attackers as they're part of the
'unsigned' portion of the certificate, so we must be careful here about the
flexibility

4) We want to define what the behaviour will be for NSS (and Mozilla)
clients if/when these constraints are violated
- Notably, is the presence of something awry a sign of a bad
certification path (which can be recovered by trying other paths) or is it
a sign of bad CA action (in which case, it should be signalled as an error
and non-functioning)


Within the IETF, the TLS WG has, in effect, rejected much of the complexity
of 4055 and 5756 by reserving specific algorithm IDs to indicate the
constrained set of PSS parameters to a sensible interoperable portion.
rsa_pss_sha256 means hashAlg = sha256, mgf = mgf1, mgf-hash-alg = sha256,
saltLength = size-of-sha256-digest (32), and doesn't need to reference the
trailer bit.
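The approach described above can be illustrated by expanding a single code point into the full parameter set it implies. A sketch (the 0x0804-0x0806 values are the RSA-PSS SignatureScheme code points from the TLS 1.3 drafts; the dictionary layout is my own):

```python
import hashlib

# TLS 1.3 SignatureScheme code points for RSA-PSS (per the TLS 1.3 drafts).
RSA_PSS_SCHEMES = {
    0x0804: "sha256",
    0x0805: "sha384",
    0x0806: "sha512",
}

def pss_params_for_scheme(code_point):
    """Expand one code point into the fixed PSS parameter set it denotes -
    no per-certificate parameter encoding needed."""
    h = RSA_PSS_SCHEMES[code_point]
    return {
        "hashAlg": h,
        "mgf": "mgf1",
        "mgfHash": h,                              # always equal to hashAlg
        "saltLength": hashlib.new(h).digest_size,  # digest-sized salt
    }

assert pss_params_for_scheme(0x0804) == {
    "hashAlg": "sha256", "mgf": "mgf1", "mgfHash": "sha256", "saltLength": 32}
```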

If we were to apply those same, common sense principles to public
certificates, we would realize that 4055 and 5756 are needlessly complex,
and instead use simple OIDs (with no parameters) to indicate an equivalent
set of permutations.

However, if we chose to avoid simplicity and pursue complexity, then I
think we'd want to treat this as:

1) A policy restriction that a CA MUST NOT use a private key that has been
used for one algorithm to be used with another (no mixing PKCS#1 v1.5 and
RSA-PSS)
2) Optionally, a policy restriction that a CA MUST NOT use a private key
with one set of RSA-PSS params to issue signatures with another set of
RSA-PSS params
3) Optionally, a policy restriction that a CA MUST NOT use a private key
with one RSA-PKCS#1v1.5 hash algorithm to issue signatures with another
RSA-PKCS#1v1.5 hash algorithm

I say "optionally", because a substantial number of the CAs already do and
have done #3, and it was critically necessary, for example, for the
transition from SHA-1 to SHA-256 - which is why I think #2 is silly and
unnecessary.

In addition
4) A policy restriction that a CA MUST NOT issue a certificate with one set
of RSA-PSS params if a certificate has been issued for that public key with
another set of RSA-PSS params
a) In the case where it's the same CA/organization issuing both
certificates, this is to prevent them from creating multiple certs
b) In the case of 'hostile' misissuance or incompetent cross-signing, it
does raise the question of "Which was issued was first" (e.g. if CA-A
issues a cert with RSA-PSS-SHA256, and CA-B issues a cert for that same key
with RSA-PSS-SHA384, who misissued?) - but I think we can resolve that by
"whichever was disclosed in CCADB later is the misissued one"
5) A policy requirement that CAs MUST encode the signature field of
TBSCertificate and TBSCertList in an unambiguous form (the policy would
provide the exact bytes of the DER encoded structure).
- This is necessary because despite PKCS#1v1.5 also having specified how
the parameters were encoded, CAs still screwed this up
6) A policy requirement that CAs MUST encode the subjectPublicKeyInfo field
of TBSCertificate in an unambiguous form (the policy would provide the
exact bytes of the DER-encoded structure)
7) Changes to NSS to ensure it did NOT attempt to DER-decode the structure
(especially given NSS's liberal acceptance of invalid DER-like BER), but
instead did a byte-for-byte comparison - much like mozilla::pkix does for
PKCS#1v1.5 (thus avoiding past CVEs in NSS)
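The byte-for-byte matching in point 7 might look like the following sketch. The template shown is the well-known DER encoding of the sha256WithRSAEncryption AlgorithmIdentifier; a real policy would pin analogous exact bytes for each permitted RSA-PSS parameter set (the function name is illustrative):

```python
# DER encoding of the sha256WithRSAEncryption AlgorithmIdentifier:
# SEQUENCE { OID 1.2.840.113549.1.1.11, NULL }
SHA256_WITH_RSA = bytes.fromhex("300d06092a864886f70d01010b0500")

def algorithm_matches(alg_bytes, allowed_templates):
    # Byte-for-byte comparison against a closed list of known-good DER
    # encodings: no decoding step, so a lenient BER-ish parser can't be
    # tricked by alternative encodings of the "same" value.
    return alg_bytes in allowed_templates

assert algorithm_matches(SHA256_WITH_RSA, {SHA256_WITH_RSA})
# An indefinite-length BER re-encoding of the same structure would not match:
assert not algorithm_matches(
    bytes.fromhex("308006092a864886f70d01010b05000000"), {SHA256_WITH_RSA})
```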


If this is adopted, it still raises the question of whether 'past' RSA-PSS
issuances are misissued - whether improperly DER-like BER encoded or mixed
hash algorithms or mixed parameter encodings - but this is somewhat an
intrinsic result of not carefully specifying the algorithms and not having
implementations be appropriately strict.

Hubert Kario

Nov 27, 2017, 12:55:24 PM
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
> One part worth discussing is the CA policy - that is, what are CAs expected
> to do or not do, and what constitutes "misissuance". Another part worth
> discussing is client behaviour - what will NSS (and Mozilla) clients
> support and not support, despite it not being misissuance, so that there's
> a clear understanding about what's supported.

but the latter must be a superset of the former. And it also describes the
part that NSS will explicitly support (and thus must test)

> The reason I make this distinction is that the relevant RFCs - 4055 and
> 5756 - are bad RFCs. While well-intentioned, they were written at the
> height of the obsession to 'parameterize all the things' to ensure 'future
> compatibility' - but the consequence of this is that they introduced a
> tremendous amount of complexity, while also mistaking risk mitigation for
> policy advice. Because these RFCs confuse and conflate these two issues,
> while also introducing significant area for mistakes, we need to be very
> careful and very precise.
>
> On the realm of CA policy, we're discussing two matters:
> 1) What should the certificates a CA issue be encoded as
> 2) How should the CA protect and use its private key.
>
> While it may not be immediately obvious, both your proposal and 4055
> attempt to treat #2 by #1, but they're actually separate issues. This
> mistake is being made by treating PSS-params on CA certificates as an
> important signal for reducing cross-protocol attacks, but it doesn't. This
> is because the same public/private key pair can be associated with multiple
> certificates, with multiple params encodings (and potentially the same
> subject), and clients that enforced the silly 4055 restrictions would
> happily accept these.

the CA can also use sexy primes as the private key, making the private key
easy to derive from the modulus... We can't list every possible way you can
overturn the intention of the RFCs.

we need to assume well-meaning actors, at least to a certain degree

> So I think it's useful to instead work from a clean set of principles, and
> try to express them:
>
> 1) The assumption, although the literature doesn't suggest it's necessary,
> and it's not presently enforced in the existing WebPKI, is that the hash
> algorithm for both PKCS#1 v1.5 and RSA-PSS should be limited to a single
> hash algorithm for the private key.
> a) One way to achieve this is via policy - to state that all signatures
> produced by a CA with a given private key must use the same set of
> parameters
> b) Another way is to try and achieve this via encoding (as 4055
> attempts), but as I noted, this is entirely toothless (and somewhat
> incorrectly presumes X.500's DIT as the mechanism of enforcing policy a)

just because the mechanism can be abused, doesn't make it useless for people
that want to use it correctly. It still will protect people that use it
correctly.

> 2) We want to ensure there is a bounded, unambiguous set of accepted
> encodings for what a CA directly controls
> a) The "signature" fields of TBSCertificate (Certs) and TBSCertList
> (CRL). OCSP does not duplicate the signature algorithm in the ResponseData
> of a BasicOCSPResponse, so it's not necessary

that's already a MUST requirement, isn't it?

> b) The "subjectPublicKeyInfo" of a TBSCertificate

that's the biggest issue

> 3) We want to make sure to set expectations around what is supported in the
> signatureAlgorithm fields of a Certificate (certs), CertificateList (CRLs),
> and BasicOCSPResponse (OCSP).
> - Notably, these fields are mutable by attackers as they're part of the
> 'unsigned' portion of the certificate, so we must be careful here about the
> flexibility

true, but a). there's no chance that a valid PKCS#1 v1.5 signature will be
accepted as an RSA-PSS signature or vice versa, b). I'm proposing addition of
only 3 valid encodings, modulo salt size

> 4) We want to define what the behaviour will be for NSS (and Mozilla)
> clients if/when these constraints are violated
> - Notably, is the presence of something awry a sign of a bad
> certification path (which can be recovered by trying other paths) or is it
> a sign of bad CA action (in which case, it should be signalled as an error
> and non-functioning)

it's an invalid signature, needs to be treated as that

> Within the IETF, the TLS WG has, in effect, rejected much of the complexity
> of 4055 and 5756 by reserving specific algorithm IDs to indicate the
> constrained set of PSS parameters to a sensible interoperable portion.
> rsa_pss_sha256 means hashAlg = sha256, mgf = mgf1, mgf-hash-alg = sha256,
> saltLength = size-of-sha256-digest (32), and doesn't need to reference the
> trailer bit.
>
> If we were to apply those same, common sense principles to public
> certificates, we would realize that 4055 and 5756 are needlessly complex,
> and instead use simple OIDs (with no parameters) to indicate an equivalent
> set of permutations.

problem with that solution is that it is not backwards compatible, and given
that support for RSA-PSS signatures in the wild dates back to OpenSSL 1.0.1
and Windows 7 (IIRC) - not building on top of it would delay public use of
RSA-PSS for at least another half a decade

> However, if we chose to avoid simplicity and pursue complexity, then I
> think we'd want to treat this as:
>
> 1) A policy restriction that a CA MUST NOT use a private key that has been
> used for one algorithm to be used with another (no mixing PKCS#1 v1.5 and
> RSA-PSS)
> 2) Optionally, a policy restriction that a CA MUST NOT use a private key
> with one set of RSA-PSS params to issue signatures with another set of
> RSA-PSS params
> 3) Optionally, a policy restriction that a CA MUST NOT use a private key
> with one RSA-PKCS#1v1.5 hash algorithm to issue signatures with another
> RSA-PKCS#1v1.5 hash algorithm
>
> I say "optionally", because a substantial number of the CAs already do and
> have done #3, and was critically necessary, for example, for the transition
> from SHA-1 to SHA-256 - which is why I think #2 is silly and unnecessary.

I don't consider allowing for encoding such restrictions hugely important
either, but I don't see a reason to forbid CAs from doing that to CA
certificates either, if they decide that they want to do that

> In addition
> 4) A policy restriction that a CA MUST NOT issue a certificate with one set
> of RSA-PSS params if a certificate has been issued for that public key with
> another set of RSA-PSS params
> a) In the case where it's the same CA/organization issuing both
> certificates, this is to prevent them from creating multiple certs
> b) In the case of 'hostile' misissuance or incompetent cross-signing, it
> does raise the question of "Which was issued was first" (e.g. if CA-A
> issues a cert with RSA-PSS-SHA256, and CA-B issues a cert for that same key
> with RSA-PSS-SHA384, who misissued?) - but I think we can resolve that by
> "whichever was disclosed in CCADB later is the misissued one"

+1

> 5) A policy requirement that CAs MUST encode the signature field of
> TBSCertificate and TBSCertList in an unambiguous form (the policy would
> provide the exact bytes of the DER encoded structure).
> - This is necessary because despite PKCS#1v1.5 also having specified how
> the parameters were encoded, CAs still screwed this up

that was because NULL versus empty was ambiguous - that's not the case for
RSA-PSS: empty params means SHA-1, and SHA-1 is forbidden; missing params is
unbounded, so there's nothing to fail interop
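The NULL-versus-absent ambiguity for PKCS#1, and the "empty params means SHA-1" default for RSA-PSS, can be made concrete with the three encodings involved (hex assembled by hand from the RFC 5280/RFC 4055 ASN.1; the helper is illustrative and assumes short-form lengths):

```python
RSA_NULL_PARAMS   = bytes.fromhex("300d06092a864886f70d0101010500")  # rsaEncryption, params = NULL
RSA_ABSENT_PARAMS = bytes.fromhex("300b06092a864886f70d010101")      # rsaEncryption, params absent
PSS_EMPTY_PARAMS  = bytes.fromhex("300d06092a864886f70d01010a3000")  # id-RSASSA-PSS, params =
                                                                     # SEQUENCE {} -> all defaults,
                                                                     # i.e. SHA-1

def params_bytes(alg_id):
    """Strip the outer SEQUENCE header and the OID to expose the raw
    parameters field (assumes short-form lengths, as in these examples)."""
    assert alg_id[0] == 0x30
    body = alg_id[2:2 + alg_id[1]]
    oid_len = body[1]
    return body[2 + oid_len:]

assert params_bytes(RSA_NULL_PARAMS) == bytes.fromhex("0500")   # NULL
assert params_bytes(RSA_ABSENT_PARAMS) == b""                   # absent
assert params_bytes(PSS_EMPTY_PARAMS) == bytes.fromhex("3000")  # empty SEQUENCE
```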

> 6) A policy requirement that CAs MUST encode the subjectPublicKeyInfo field
> of TBSCertificate in an unambiguous form (the policy would provide the
> exact bytes of the DER-encoded structure)
> 7) Changes to NSS to ensure it did NOT attempt to DER-decode the structure
> (especially given NSS's liberal acceptance of invalid DER-like BER), but
> instead did a byte-for-byte comparison - much like mozilla::pkix does for
> PKCS#1v1.5 (thus avoiding past CVEs in NSS)

that would require hardcoding salt lengths; given their meaning in
subjectPublicKeyInfo, I wouldn't be too happy about it

looking at OpenSSL behaviour, it would likely render all past signatures
invalid and make signatures with already released software unnecessarily
complex (OpenSSL defaults to as large a salt as possible)

> If this is adopted, it still raises the question of whether 'past' RSA-PSS
> issuances are misissued - whether improperly DER-like BER encoded or mixed
> hash algorithms or mixed parameter encodings - but this is somewhat an
> intrinsic result of not carefully specifying the algorithms and not having
> implementations be appropriately strict.

for X.509 only DER is allowed; if the tags or values are not encoded with the
minimal number of bytes necessary, or with indefinite length, it's not DER,
it's BER, and that's strictly forbidden
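The minimal-encoding rule that separates DER from BER can be sketched as a check on the length octets (an illustrative helper, not production parsing code):

```python
def der_length_is_minimal(buf, offset):
    """Check that the length octets at buf[offset:] are valid DER:
    definite form, in the shortest possible encoding. BER's indefinite
    form and zero-padded long forms are rejected."""
    first = buf[offset]
    if first < 0x80:
        return True                    # short form is always minimal
    n = first & 0x7F
    if n == 0:
        return False                   # 0x80 = indefinite length (BER only)
    length_bytes = buf[offset + 1:offset + 1 + n]
    if length_bytes[0] == 0x00:
        return False                   # leading zero octet: padded, not minimal
    if n == 1:
        return length_bytes[0] >= 0x80  # long form only when short form can't fit
    return True

assert der_length_is_minimal(bytes([0x30, 0x0d]), 1)              # short form
assert der_length_is_minimal(bytes([0x30, 0x82, 0x01, 0x00]), 1)  # 256 needs two octets
assert not der_length_is_minimal(bytes([0x30, 0x80]), 1)          # indefinite (BER)
assert not der_length_is_minimal(bytes([0x30, 0x81, 0x05]), 1)    # 5 padded to long form
```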

Ryan Sleevi

Nov 27, 2017, 2:32:37 PM
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
On Mon, Nov 27, 2017 at 12:54 PM, Hubert Kario <hka...@redhat.com> wrote:
>
> > On the realm of CA policy, we're discussing two matters:
> > 1) What should the certificates a CA issue be encoded as
> > 2) How should the CA protect and use its private key.
> >
> > While it may not be immediately obvious, both your proposal and 4055
> > attempt to treat #2 by #1, but they're actually separate issues. This
> > mistake is being made by treating PSS-params on CA certificates as an
> > important signal for reducing cross-protocol attacks, but it doesn't. This
> > is because the same public/private key pair can be associated with multiple
> > certificates, with multiple params encodings (and potentially the same
> > subject), and clients that enforced the silly 4055 restrictions would
> > happily accept these.
>
> the CA can also use sexy primes as the private key, making the private key
> easy to derive from the modulus... We can't list every possible way you can
> overturn the intention of the RFCs.
>
> we need to assume well-meaning actors, at least to a certain degree
>

First, I absolutely disagree with your assumption - we need to assume
hostility, and design our code and policies to be robust against that. I
should hope that was uncontroversial, but it doesn't seem to be.

Second, the only reason this is an issue was your suggestion (derived from
4055, to be fair) about restricting the params<->signature interaction. The
flexibility afforded by 4055 in expressing the parameters, and then
subsequently constraining the validation rules, is not actually met by the
threat model.

That is, if it's dangerous to mix the hash algorithms in PSS signatures
(and I'm not aware of literature suggesting this is necessary, versus being
speculative concern), then we should explicitly prohibit it via policy.
Requiring the parameters in the certificates does not, in any way, mitigate
this risk - and its presumptive inclusion in 4055 was to constrain how
signature-creating-software behaved, rather than how
signature-accepting-clients should behave.

Alternatively, if mixing the hash algorithms is not fundamentally unsafe in
the case of RSA-PSS, then it's unnecessary and overly complicating things
to include the params in the SPKI of the CA's certificate. The fact that
'rsaEncryption' needs to be accepted as valid for the issuance of RSA-PSS
signatures already implies it's acceptable, and so the whole SHOULD
construct is imposing on the ecosystem an unsupported policy.

So no, we should not assume well-meaning actors, and we should be explicit
about what the "intention" of the RFCs is, and whether they actually
achieve that.


> > So I think it's useful to instead work from a clean set of principles, and
> > try to express them:
> >
> > 1) The assumption, although the literature doesn't suggest it's necessary,
> > and it's not presently enforced in the existing WebPKI, is that the hash
> > algorithm for both PKCS#1 v1.5 and RSA-PSS should be limited to a single
> > hash algorithm for the private key.
> > a) One way to achieve this is via policy - to state that all signatures
> > produced by a CA with a given private key must use the same set of
> > parameters
> > b) Another way is to try and achieve this via encoding (as 4055
> > attempts), but as I noted, this is entirely toothless (and somewhat
> > incorrectly presumes X.500's DIT as the mechanism of enforcing policy a)
>
> just because the mechanism can be abused, doesn't make it useless for
> people
> that want to use it correctly. It still will protect people that use it
> correctly.
>

B is absolutely useless as a security mechanism against threats, and is
instead a way for signature-producing software to bake an API contract into
an RFC. We shouldn't encourage that, nor should the ecosystem have to
bear that complexity.

If it's not a security mechanism, then it's unnecessary.


> > 2) We want to ensure there is a bounded, unambiguous set of accepted
> > encodings for what a CA directly controls
> > a) The "signature" fields of TBSCertificate (Certs) and TBSCertList
> > (CRL). OCSP does not duplicate the signature algorithm in the ResponseData
> > of a BasicOCSPResponse, so it's not necessary
>
> that's already a MUST requirement, isn't it?
>

It's not what NSS has implemented and shipped, as captured on the bug.

And this matters, because permissive bugs in client implementations
absolutely lead to widespread ossification of server bugs, which is why I
specifically requested that the NSS developers unship RSA-PSS support until
they can correctly and properly implement it.

We already saw this with RSA-PKCS#1v1.5 - it shouldn't be repeated again.


>
> > b) The "subjectPublicKeyInfo" of a TBSCertificate
>
> that's the biggest issue
>
> > 3) We want to make sure to set expectations around what is supported in the
> > signatureAlgorithm fields of a Certificate (certs), CertificateList (CRLs),
> > and BasicOCSPResponse (OCSP).
> > - Notably, these fields are mutable by attackers as they're part of the
> > 'unsigned' portion of the certificate, so we must be careful here about the
> > flexibility
>
> true, but a). there's no chance that a valid PKCS#1 v1.5 signature will be
> accepted as an RSA-PSS signature or vice versa, b). I'm proposing addition of
> only 3 valid encodings, modulo salt size
>

IMO, a) is not relevant to the set of concerns, which I echoed on the bug and
again above

And I'm suggesting that while you're proposing prosaically three valid
encodings, this community has ample demonstration that CAs have difficulty
correctly implementing things - in part, due to clients such as NSS
shipping Postel-liberal parsers - and so the policy should make it as
unambiguous as possible. The best way to make this unambiguous is to
provide the specific encodings - byte for byte.

Then a correct implementation can do a byte-for-byte evaluation of the
algorithm, without needing to parse at all - a net win.
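As an illustration of what that could look like (a hypothetical sketch, not NSS or mozilla::pkix code; the single whitelisted value below is the ordinary sha256WithRSAEncryption AlgorithmIdentifier, standing in for whatever exact encodings the policy would enumerate):

```python
# Hypothetical sketch: accept a signature AlgorithmIdentifier only if its
# exact DER bytes appear in a fixed whitelist - no ASN.1 parsing involved.

# Placeholder entry: sha256WithRSAEncryption (1.2.840.113549.1.1.11) with
# NULL parameters; a real policy would list each permitted encoding.
ALLOWED_ALG_IDS = frozenset({
    bytes.fromhex("300d06092a864886f70d01010b0500"),
})

def signature_algorithm_permitted(alg_id_der: bytes) -> bool:
    # Byte-for-byte comparison: anything not exactly whitelisted - a wrong
    # tag, a BER long-form length, stray params - is rejected outright.
    return alg_id_der in ALLOWED_ALG_IDS

print(signature_algorithm_permitted(bytes.fromhex("300d06092a864886f70d01010b0500")))  # True
print(signature_algorithm_permitted(bytes.fromhex("300d06092a864886f70d01010b")))      # False
```

The design point is that DER-like BER variants can never match, because matching is on the exact byte string rather than on a decoded structure.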


> > 4) We want to define what the behaviour will be for NSS (and Mozilla)
> > clients if/when these constraints are violated
> > - Notably, is the presence of something awry a sign of a bad
> > certification path (which can be recovered by trying other paths) or is it
> > a sign of bad CA action (in which case, it should be signalled as an error
> > and non-functioning)
>
> it's an invalid signature, needs to be treated as that
>

I think my point still stands that 'invalid signature' can be treated as
either case I mentioned, and so your answer doesn't actually resolve the
matter.



> > However, if we chose to avoid simplicity and pursue complexity, then I
> > think we'd want to treat this as:
> >
> > 1) A policy restriction that a CA MUST NOT use a private key that has been
> > used for one algorithm to be used with another (no mixing PKCS#1 v1.5 and
> > RSA-PSS)
> > 2) Optionally, a policy restriction that a CA MUST NOT use a private key
> > with one set of RSA-PSS params to issue signatures with another set of
> > RSA-PSS params
> > 3) Optionally, a policy restriction that a CA MUST NOT use a private key
> > with one RSA-PKCS#1v1.5 hash algorithm to issue signatures with another
> > RSA-PKCS#1v1.5 hash algorithm
> >
> > I say "optionally", because a substantial number of the CAs already do and
> > have done #3, and it was critically necessary, for example, for the transition
> > from SHA-1 to SHA-256 - which is why I think #2 is silly and unnecessary.
>
> I don't consider allowing for encoding such restrictions hugely important
> either, but I don't see a reason to forbid CAs from doing that to CA
> certificates either, if they decide that they want to do that
>

Why one and not the other? Personal preference? There's a lack of tight
proof either way as to the harm.


> > 5) A policy requirement that CAs MUST encode the signature field of
> > TBSCertificate and TBSCertList in an unambiguous form (the policy would
> > provide the exact bytes of the DER encoded structure).
> > - This is necessary because despite PKCS#1v1.5 also having specified how
> > the parameters were encoded, CAs still screwed this up
>
> that was because NULL versus empty was ambiguous - that's not the case for
> RSA-PSS: empty params means SHA-1, and SHA-1 is forbidden; missing params is
> unbounded, so there's nothing to fail interop
>

I disagree with your assessment, again borne out by the experience here in
the community.

I can easily see a CA mistaking "MGF is MGF1", leading to encoding the
hashAlgorithm as SHA-1 and the MGF as id-mgf1 without realizing that params
also needs to be specified.

Consider, for example, that RFC 4055's rsaSSA-PSS-SHA256-Params,
SHA384-Params, and SHA512-Params all set saltLength as 20. The subtlety of
the policy requiring 32/48/64 rather than 20/20/20 is absolutely a mistake
a CA can make. For example, their software may say "PSS/SHA-256" and result
in 4055's PSS-SHA256-Params rather than the proposed requirement.
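The subtlety is visible directly in the DER: saltLength is an INTEGER inside explicit tag [2], and (if I read RFC 4055 right) 20 is its DEFAULT - so under strict DER a 20-byte salt is expressed by omitting the field entirely, a single byte's difference from the proposed 32. A hedged sketch of that encoding:

```python
# Hedged sketch of the saltLength encoding inside RSASSA-PSS-params
# (context tag [2], EXPLICIT, wrapping an INTEGER; values assumed < 128).

def salt_length_field(salt_len: int) -> bytes:
    assert 0 <= salt_len < 128, "single-byte INTEGER sketch only"
    inner = bytes([0x02, 0x01, salt_len])     # INTEGER saltLength
    return bytes([0xA2, len(inner)]) + inner  # [2] EXPLICIT wrapper

# RFC 4055's profile value vs. the 32 bytes the proposal would require:
print(salt_length_field(20).hex())  # a203020114
print(salt_length_field(32).hex())  # a203020120
```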


> > 6) A policy requirement that CAs MUST encode the subjectPublicKeyInfo field
> > of TBSCertificate in an unambiguous form (the policy would provide the
> > exact bytes of the DER-encoded structure)
> > 7) Changes to NSS to ensure it did NOT attempt to DER-decode the structure
> > (especially given NSS's liberal acceptance of invalid DER-like BER), but
> > instead did a byte-for-byte comparison - much like mozilla::pkix does for
> > PKCS#1v1.5 (thus avoiding past CVEs in NSS)
>
> that would require hardcoding salt lengths, given their meaning in
> subjectPublicKeyInfo; I wouldn't be too happy about it
>
> looking at OpenSSL behaviour, it would likely render all past signatures
> invalid and make signatures with already released software unnecessarily
> complex (OpenSSL defaults to as large a salt as possible)
>

That's OK, because none of these certificates are publicly trusted, and
there's zero reason for a client to support all of the ill-considered
flexibility of 4055.


> > If this is adopted, it still raises the question of whether 'past' RSA-PSS
> > issuances are misissued - whether improperly DER-like BER encoded or mixed
> > hash algorithms or mixed parameter encodings - but this is somewhat an
> > intrinsic result of not carefully specifying the algorithms and not having
> > implementations be appropriately strict.
>
> for X.509 only DER is allowed; if the tags or values are not encoded with
> the minimal number of bytes necessary, or with indeterminate length, it's not
> DER, it's BER - and that's strictly forbidden


I appreciate your contribution, but I think it's not borne out by the real
world. If it was, then
https://wiki.mozilla.org/SecurityEngineering/Removing_Compatibility_Workarounds_in_mozilla::pkix
wouldn't have been necessary.

"strictly forbidden, but not enforced by clients" is just another way of
saying "implicitly permitted and likely to ossify". I would like to avoid
that, since many of these issues mentioned were in part caused by
past-acceptance of DER-like BER by NSS.

Hubert Kario

Nov 27, 2017, 4:52:29 PM11/27/17
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
On Monday, 27 November 2017 20:31:53 CET Ryan Sleevi wrote:
> On Mon, Nov 27, 2017 at 12:54 PM, Hubert Kario <hka...@redhat.com> wrote:
> > > On the realm of CA policy, we're discussing two matters:
> > > 1) What should the certificates a CA issue be encoded as
> > > 2) How should the CA protect and use its private key.
> > >
> > > While it may not be immediately obvious, both your proposal and 4055
> > > attempt to treat #2 by #1, but they're actually separate issues. This
> > > mistake is being made by treating PSS-params on CA certificates as an
> > > important signal for reducing cross-protocol attacks, but it doesn't. This
> > > is because the same public/private key pair can be associated with multiple
> > > certificates, with multiple params encodings (and potentially the same
> > > subject), and clients that enforced the silly 4055 restrictions would
> > > happily accept these.
> >
> > the CA can also use sexy primes as the private key, making the private key
> > easy to derive from the modulus... We can't list every possible way you
> > can
> > overturn the intention of the RFCs.
> >
> > we need to assume well-meaning actors, at least to a certain degree
>
> First, I absolutely disagree with your assumption - we need to assume
> hostility, and design our code and policies to be robust against that. I
> should hope that was uncontroversial, but it doesn't seem to be.

my point was that there are some actions of the other party we interact with
that must be taken on faith, like that the server_random is actually random,
or the server private part of the key share stays secret, or a myriad other
things that we cannot verify legitimacy or correctness of. Without assuming
good faith it's impossible to communicate. Same for generated keys
provided to CAs.

> Second, the only reason this is an issue was your suggestion (derived from
> 4055, to be fair) about restricting the params<->signature interaction. The
> flexibility afforded by 4055 in expressing the parameters, and then
> subsequently constraining the validation rules, is not actually met by the
> threat model.

There was a threat model in RFC 4055?

Is it so hard to imagine that a CA may want to ensure that the signatures it
makes will never use a weak hash? And that even if it makes them, they won't
be validated by a conforming implementation?

> That is, if it's dangerous to mix the hash algorithms in PSS signatures
> (and I'm not aware of literature suggesting this is necessary, versus being
> speculative concern), then we should explicitly prohibit it via policy.
> Requiring the parameters in the certificates does not, in any way, mitigate
> this risk - and its presumptive inclusion in 4055 was to constrain how
> signature-creating-software behaved, rather than how
> signature-accepting-clients should behave.
>
> Alternatively, if mixing the hash algorithms is not fundamentally unsafe in
> the case of RSA-PSS, then it's unnecessary and overly complicating things
> to include the params in the SPKI of the CA's certificate. The fact that
> 'rsaEncryption' needs to be accepted as valid for the issuance of RSA-PSS
> signatures already implies it's acceptable, and so the whole SHOULD
> construct is imposing on the ecosystem an unsupported policy.

it would be nice if the world was black and white, it would make any kind of
nuanced work so much easier...

Those are all shades of grey; for some uses allowing the possibility of cross-
protocol attacks is acceptable, for others not so much.

> So no, we should not assume well-meaning actors, and we should be explicit
> about what the "intention" of the RFCs is, and whether they actually
> achieve that.

but we should achieve that by saying "do this", not "don't do this";
enumerating badness doesn't work - ask firewall people if you don't believe
me.

Or did we add to policy that keys revoked because they may have been
compromised (heartbleed) can't be reused? Ever? Even by a different CA?

> > > So I think it's useful to instead work from a clean set of principles, and
> > > try to express them:
> > >
> > > 1) The assumption, although the literature doesn't suggest it's necessary,
> > > and it's not presently enforced in the existing WebPKI, is that the hash
> > > algorithm for both PKCS#1 v1.5 and RSA-PSS should be limited to a single
> > > hash algorithm for the private key.
> > > a) One way to achieve this is via policy - to state that all signatures
> > > produced by a CA with a given private key must use the same set of
> > > parameters
> > > b) Another way is to try and achieve this via encoding (as 4055
> > > attempts), but as I noted, this is entirely toothless (and somewhat
> > > incorrectly presumes X.500's DIT as the mechanism of enforcing policy a)
> >
> > just because the mechanism can be abused, doesn't make it useless for
> > people
> > that want to use it correctly. It still will protect people that use it
> > correctly.
>
> B is absolutely useless as a security mechanism against threats, and is
> instead a way of signature-producing software to bake in an API contract in
> to an RFC. We shouldn't encourage that, nor should the ecosystem have to
> bear that complexity.
>
> If it's not a security mechanism, then it's unnecessary.

how is stating "I will never use SHA-1" not a security mechanism?

> > > 2) We want to ensure there is a bounded, unambiguous set of accepted
> > > encodings for what a CA directly controls
> > > a) The "signature" fields of TBSCertificate (Certs) and TBSCertList
> > > (CRL). OCSP does not duplicate the signature algorithm in the ResponseData
> > > of a BasicOCSPResponse, so it's not necessary
> >
> > that's already a MUST requirement, isn't it?
>
> It's not what NSS has implemented, but shipping, as captured on the bug.
>
> And this matters, because permissive bugs in client implementations
> absolutely lead to widespread ossification of server bugs, which is why I
> specifically requested that the NSS developers unship RSA-PSS support until
> they can correctly and properly implement it.
>
> We already saw this with RSA-PKCS#1v1.5 - it shouldn't be repeated again.

it turned out to be a problem because a). it was the 90's, b). everybody did
it like that, c). everybody assumed (not tested) that security software was
written, well, securely...

I don't think any of those things remains true

> > > b) The "subjectPublicKeyInfo" of a TBSCertificate
> >
> > that's the biggest issue
> >
> > > 3) We want to make sure to set expectations around what is supported in the
> > > signatureAlgorithm fields of a Certificate (certs), CertificateList (CRLs),
> > > and BasicOCSPResponse (OCSP).
> > > - Notably, these fields are mutable by attackers as they're part of the
> > > 'unsigned' portion of the certificate, so we must be careful here about the
> > > flexibility
> >
> > true, but a). there's no chance that a valid PKCS#1 v1.5 signature will be
> > accepted as an RSA-PSS signature or vice versa, b). I'm proposing addition of
> > only 3 valid encodings, modulo salt size
>
> IMO, a) is not relevant to the set of concerns, which I echoed on the bug and
> again above
>
> And I'm suggesting that while you're proposing prosaically three valid
> encodings, this community has ample demonstration that CAs have difficulty
> correctly implementing things - in part, due to clients such as NSS
> shipping Postel-liberal parsers - and so the policy should make it as
> unambiguous as possible. The best way to make this unambiguous is to
> provide the specific encodings - byte for byte.

or provide examples of specific encodings with explanations of what can change
and to what degree...

> Then a correct implementation can do a byte-for-byte evaluation of the
> algorithm, without needing to parse at all - a net win.

that horse left the barn a long time ago

> > > 4) We want to define what the behaviour will be for NSS (and Mozilla)
> > > clients if/when these constraints are violated
> > > - Notably, is the presence of something awry a sign of a bad
> > > certification path (which can be recovered by trying other paths) or is it
> > > a sign of bad CA action (in which case, it should be signalled as an error
> > > and non-functioning)
> >
> > it's an invalid signature, needs to be treated as that
>
> I think my point still stands that 'invalid signature' can be treated as
> either case I mentioned, and so your answer doesn't actually resolve the
> matter.

it means it should be treated the exact same way other invalid signatures are
treated by NSS

> > > However, if we chose to avoid simplicity and pursue complexity, then I
> > > think we'd want to treat this as:
> > >
> > > 1) A policy restriction that a CA MUST NOT use a private key that has been
> > > used for one algorithm to be used with another (no mixing PKCS#1 v1.5 and
> > > RSA-PSS)
> > > 2) Optionally, a policy restriction that a CA MUST NOT use a private key
> > > with one set of RSA-PSS params to issue signatures with another set of
> > > RSA-PSS params
> > > 3) Optionally, a policy restriction that a CA MUST NOT use a private key
> > > with one RSA-PKCS#1v1.5 hash algorithm to issue signatures with another
> > > RSA-PKCS#1v1.5 hash algorithm
> > >
> > > I say "optionally", because a substantial number of the CAs already do and
> > > have done #3, and it was critically necessary, for example, for the transition
> > > from SHA-1 to SHA-256 - which is why I think #2 is silly and unnecessary.
> >
> > I don't consider allowing for encoding such restrictions hugely important
> > either, but I don't see a reason to forbid CAs from doing that to CA
> > certificates either, if they decide that they want to do that
>
> Why one and not the other? Personal preference? There's a lack of tight
> proof either way as to the harm.

because while saying "I will only ever use $HASH_NAME" is not really useful,
saying "I will never use $HASH_NAME" is useful; unfortunately we can only
describe the latter in terms of the former

> > > 5) A policy requirement that CAs MUST encode the signature field of
> > > TBSCertificate and TBSCertList in an unambiguous form (the policy would
> > > provide the exact bytes of the DER encoded structure).
> > >
> > > - This is necessary because despite PKCS#1v1.5 also having specified how
> > > the parameters were encoded, CAs still screwed this up
> >
> > that was because NULL versus empty was ambiguous - that's not the case for
> > RSA-PSS: empty params means SHA-1, and SHA-1 is forbidden; missing params is
> > unbounded, so there's nothing to fail interop
>
> I disagree with your assessment, again borne out by the experience here in
> the community.
>
> I can easily see a CA mistaking "MGF is MGF1", leading to encoding the
> hashAlgorithm as SHA-1 and the MGF as id-mgf1 without realizing that params
> also needs to be specified.

that's also the part I have no problem whatsoever with the specification
spelling out in excruciating detail

> Consider, for example, that RFC 4055's rsaSSA-PSS-SHA256-Params,
> SHA384-Params, and SHA512-Params all set saltLength as 20. The subtlety of
> the policy requiring 32/48/64 rather than 20/20/20 is absolutely a mistake
> a CA can make. For example, their software may say "PSS/SHA-256" and result
> in 4055's PSS-SHA256-Params rather than the proposed requirement.

then describe what is allowed, update policy, inform CAs in CA communication
what is allowed and implement those limitations in software.

we are talking about months of time for any ossification to happen; it took
decades for PKCS#1 v1.5 problems to pop up, and they still affected only
single-digit percentages of certificates.

> > > 6) A policy requirement that CAs MUST encode the subjectPublicKeyInfo field
> > > of TBSCertificate in an unambiguous form (the policy would provide the
> > > exact bytes of the DER-encoded structure)
> > > 7) Changes to NSS to ensure it did NOT attempt to DER-decode the structure
> > > (especially given NSS's liberal acceptance of invalid DER-like BER), but
> > > instead did a byte-for-byte comparison - much like mozilla::pkix does for
> > > PKCS#1v1.5 (thus avoiding past CVEs in NSS)
> >
> > that would require hardcoding salt lengths, given their meaning in
> > subjectPublicKeyInfo; I wouldn't be too happy about it
> >
> > looking at OpenSSL behaviour, it would likely render all past signatures
> > invalid and make signatures with already released software unnecessarily
> > complex (OpenSSL defaults to as large a salt as possible)
>
> That's OK, because none of these certificates are publicly trusted, and
> there's zero reason for a client to support all of the ill-considered
> flexibility of 4055.

that does not apply to software already released that can make those
signatures with valid, publicly trusted keys

also, just because those CAs are not trusted now, doesn't mean that they can't
become trusted in the future through cross-signing agreements

> > > If this is adopted, it still raises the question of whether 'past' RSA-PSS
> > > issuances are misissued - whether improperly DER-like BER encoded or mixed
> > > hash algorithms or mixed parameter encodings - but this is somewhat an
> > > intrinsic result of not carefully specifying the algorithms and not having
> > > implementations be appropriately strict.
> >
> > for X.509 only DER is allowed; if the tags or values are not encoded with
> > the minimal number of bytes necessary, or with indeterminate length, it's not
> > DER, it's BER - and that's strictly forbidden
>
> I appreciate your contribution, but I think it's not borne out by the real
> world. If it was, then
> https://wiki.mozilla.org/SecurityEngineering/Removing_Compatibility_Workarounds_in_mozilla::pkix
> wouldn't have been necessary.
>
> "strictly forbidden, but not enforced by clients" is just another way of
> saying "implicitly permitted and likely to ossify". I would like to avoid
> that, since many of these issues mentioned were in part caused by
> past-acceptance of DER-like BER by NSS.

then let's make sure that examples of such malformed certificates exist so
that any library can be tested for whether it rejects them. And as
documentation to CAs that spells out "don't do this in particular".

Ryan Sleevi

Nov 27, 2017, 5:38:43 PM11/27/17
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
On Mon, Nov 27, 2017 at 4:51 PM, Hubert Kario <hka...@redhat.com> wrote:
>
> > First, I absolutely disagree with your assumption - we need to assume
> > hostility, and design our code and policies to be robust against that. I
> > should hope that was uncontroversial, but it doesn't seem to be.
>
> my point was that there are some actions of the other party we interact
> with
> that must be taken on faith, like that the server_random is actually
> random,
> or the server private part of the key share stays secret, or a myriad other
> things that we cannot verify legitimacy or correctness of. Without assuming
> good faith it's impossible to communicate. Same for generated keys
> provided to CAs.
>

With respect to CA behaviours, I still disagree.

I agree we must rely on server random to be random. Similarly, we 'rely' on
CAs' random serials to be random - but we also make it clear that it's a
policy expectation.

In this same regard, if the intent is to limit a given key from being used
with multiple RSA-PSS hash algorithms, or from being used with RSA-PSS and
RSA-PKCS#1v1.5, then we should explicitly state that in policy. Implying
that (via the certificate limits) without requiring that is, as time has
repeatedly shown, a recipe for disaster, because it's completely
non-obvious as to why this might be (and, again, not supported by the
literature)

Alternatively, if it's not necessary to limit such usage, then we should
not be requiring parameters in the SPKI, because it's not necessary.
Further, for simplicity and interoperability - and to prevent the
misguided belief that it is necessary - we should explicitly forbid
parameters.

There's no need to support both cases - we should MUST or MUST NOT, but
should not SHOULD.


> > Second, the only reason this is an issue was your suggestion (derived from
> > 4055, to be fair) about restricting the params<->signature interaction. The
> > flexibility afforded by 4055 in expressing the parameters, and then
> > subsequently constraining the validation rules, is not actually met by the
> > threat model.
>
> There was a threat model in RFC 4055?
>
> Is it so hard to imagine that a CA may want to ensure that the signatures it
> makes will never use a weak hash? And that even if it makes them, they won't
> be validated by a conforming implementation?
>

If the CA wishes to ensure that, then it doesn't need the certificate to
ensure that - as the certificate doesn't control how they use the key.
If the CA happens to do so anyways, the certificate doesn't prevent them
from being validated by a conforming implementation, because another
certificate may have been issued with the same subject name with a
different set of parameters.

That is, under an adversarial model, it makes no sense to hinge it on the
certificate's encoding of the parameters. If you're trying to prevent CA
misuse of the private key, then be explicit about the policy being that the
CA should not misuse said private key, and any use of the private key is a
violation. If any signatures are thus created, the fact that the CA has
generated such a signature is the issue, not whether or not clients accept
it - because if the CA can't protect their private key adequately (from
insider misconfiguration or external attack), then all bets are off.

This is about having a rational threat model for the required policy. What
you propose is not rational under either external CA adversarial threats or
internal misconfiguration.


>
> > That is, if it's dangerous to mix the hash algorithms in PSS signatures
> > (and I'm not aware of literature suggesting this is necessary, versus being
> > speculative concern), then we should explicitly prohibit it via policy.
> > Requiring the parameters in the certificates does not, in any way, mitigate
> > this risk - and its presumptive inclusion in 4055 was to constrain how
> > signature-creating-software behaved, rather than how
> > signature-accepting-clients should behave.
> >
> > Alternatively, if mixing the hash algorithms is not fundamentally unsafe in
> > the case of RSA-PSS, then it's unnecessary and overly complicating things
> > to include the params in the SPKI of the CA's certificate. The fact that
> > 'rsaEncryption' needs to be accepted as valid for the issuance of RSA-PSS
> > signatures already implies it's acceptable, and so the whole SHOULD
> > construct is imposing on the ecosystem an unsupported policy.
>
> it would be nice if the world was black and white; it would make any kind of
> nuanced work so much easier...
>
> Those are all shades of grey; for some uses allowing the possibility of cross-
> protocol attacks is acceptable, for others not so much.
>

We're talking about the Web PKI. It really is black or white here - there's
no need to introduce flexibility that can be misused (intentionally or not)
by CAs and add complexity to clients.

Either it's safe to do - and we should both explicitly allow it and prevent
unnecessary complexity that is RFC 4055's many knobs - or it's not safe to
do, in which case, none of 4055's knobs adequately capture that it's
unsafe, and the policies should be based around the key. It really is that
black or white :)


> > So no, we should not assume well-meaning actors, and we should be explicit
> > about what the "intention" of the RFCs is, and whether they actually
> > achieve that.
>
> but we should achieve that by saying "do this", not "don't do this",
> enumerating badness doesn't work - ask firewall people if you don't believe
> me.
>
> Or did we add to policy that keys revoked because they may have been
> compromised (heartbleed) can't be reused? Ever? Even by a different CA?
>

You've completely misframed my proposal. I'm enumerating a specific
whitelist of what is permitted. Every other option, unless otherwise
permitted, is restricted. I'm even going to the level of proposing a
byte-for-byte comparison function such that there's not even a prosaic
whitelist - it's such that the policy is black and white and transcends
language barriers by being expressed directly in the technology.

You're enumerating a blacklist - saying that all of the flexibility of 4055
is permitted (except for these specific combinations), but propose to
enforce neither of those through code or policy. These restrictions or
prohibitions are easily misunderstood, especially for non-native speakers,
and without justification - thus failing to meet the test for good policy.
I don't think this is intentional, but it's why I reframed your proposal
into first principles - if our goal is X, we should do this, if our goal is
Y, we should do this - and not leave it ambiguous.


> > B is absolutely useless as a security mechanism against threats, and is
> > instead a way of signature-producing software to bake in an API contract
> in
> > to an RFC. We shouldn't encourage that, nor should the ecosystem have to
> > bear that complexity.
> >
> > If it's not a security mechanism, then it's unnecessary.
>
> how is stating "I will never use SHA-1" not a security mechanism?
>

Stating "Don't trust SHA-1 signatures from me - but, oh, btw, I'll happily
create them" is a failure of security, much in the way that you're
seemingly worried about cross-protocol attacks. If the goal is to ensure
the private key doesn't create a weak signature, then let's say that is the
requirement - everything else is defense in depth. But if you're solely
relying on the certificate to constrain those, rather than the key
protection, then that's insufficient.


> > IMO, a) is not relevant to set of concerns, which I echo'd on the bug and
> > again above
> >
> > And I'm suggesting that while you're proposing prosaically three valid
> > encodings, this community has ample demonstration that CAs have
> difficulty
> > correctly implementing things - in part, due to clients such as NSS
> > shipping Postel-liberal parsers - and so the policy should make it as
> > unambiguous as possible. The best way to make this unambiguous is to
> > provide the specific encodings - byte for byte.
>
> or provide examples of specific encodings with explanations what can change
> and to what degree...
>

There's no need for that flexibility. None. Not in the Web PKI. Especially
if your concern is someone running a production CA based on OpenSSL's
default implementation.


> > Then a correct implementation can do a byte-for-byte evaluation of the
> > algorithm, without needing to parse at all - a net win.
>
> that horse left the barn long time ago
>

Hardly. OpenSSL can't parse RSA-PSS SPKIs until the (unreleased) OpenSSL
1.1.1, which this policy would require (for CA certificates). Look at
the Web PKI and you see virtually no such certificates exist.

The point of the policy is what is on publicly trusted CAs, so I don't
think it's relevant or productive to argue for hypothetical future-public
CAs or internal CAs.


> > Consider, for example, that RFC 4055's rsaSSA-PSS-SHA256-Params,
> > SHA384-Params, and SHA512-Params all set saltLength as 20. The subtlety
> of
> > the policy requiring 32/48/64 rather than 20/20/20 is absolutely a
> mistake
> > a CA can make. For example, their software may say "PSS/SHA-256" and
> result
> > in 4055's PSS-SHA256-Params rather than the proposed requirement.
>
> then describe what is allowed, update policy, inform CAs in CA
> communication
> what is allowed and implement those limitations in software.
>

And don't ship the software until those limitations are implemented, as
otherwise, the implementation is incomplete.


> we are talking about months of time for any ossification to happen, it took
> decades for PKCS#1 v1.5 problems to pop up and it still affected only single
> digit percentages of certificates.
>

The fact that it affected a single-digit percentage of certificates (and, I
want to note, you haven't provided any citation for that - I would greatly
welcome backing that up) is a HUGE number in web compatibility terms.


> > That's OK, because none of these certificates are publicly trusted, and
> > there's zero reason for a client to support all of the ill-considered
> > flexibility of 4055.
>
> that does not apply to software already released that can make those
> signatures with valid, publicly trusted keys
>
> also, just because those CAs are not trusted now, doesn't mean that they
> can't
> become trusted in the future through cross-signing agreements
>

Yes, it does mean that - we can set the policy now, and such CAs cannot
become 'retroactively valid'. That's already a goal of the policy, and why
root store inclusions already evaluate whether the existing CA has issued
non-BR or Mozilla-complying certs and, if so, requires a new infrastructure
to be split from the old infrastructure.

It seems like the crux of your objections to these common-sense restraints
is:
- OpenSSL might have generated such certificates for internal purposes
- Noting that the lack of SPKI parsing means that the proposed
constraints are already not-interoperable
- This only applies to internal certificates
- CryptoAPI supports the 'existing' OIDs (and by default) and we need to
support that case
- CryptoAPI also doesn't include subjectAltNames by default, but the BRs
require it, so that's not a very compelling argument

Hubert Kario

unread,
Nov 28, 2017, 8:05:36 AM
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
On Monday, 27 November 2017 23:37:59 CET Ryan Sleevi wrote:
> On Mon, Nov 27, 2017 at 4:51 PM, Hubert Kario <hka...@redhat.com> wrote:
> > > So no, we should not assume well-meaning actors, and we should be
> >
> > explicit
> >
> > > about what the "intention" of the RFCs is, and whether they actually
> > > achieve that.
> >
> > but we should achieve that by saying "do this", not "don't do this",
> > enumerating badness doesn't work - ask firewall people if you don't
> > believe
> > me.
> >
> > Or did we add to policy that keys revoked because they may have been
> > compromised (heartbleed) can't be reused? Ever? Even by a different CA?
>
> You've completely misframed my proposal. I'm enumerating a specific
> whitelist of what is permitted. Every other option, unless otherwise
> permitted, is restricted. I'm even going to the level of proposing a
> byte-for-byte comparison function such that there's not even a prosaic
> whitelist - it's such that the policy is black and white and transcends
> language barriers by expressing directly in the technology.
>
> You're enumerating a blacklist - saying that all of the flexibility of 4055
> is permitted (except for these specific combinations), but propose to
> enforce neither of those through code or policy.

where did I do that?

it's the second time you're putting words in my mouth, I really do not
appreciate that.

Ryan Sleevi

unread,
Nov 28, 2017, 11:09:45 AM
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
Hubert, while it's certainly not my intent to misrepresent your position, I
think it's worth remarking that in your reply immediately prior, you
highlighted that "but we should achieve that by saying 'Do this', not
"don't do this", enumerating badness doesn't work". This was similarly
putting words in my mouth - but, as I highlighted, it was a
misunderstanding, and tried to clarify.

To your question, the following statements were made earlier in the thread:
"" - issuing certificates may include RSA-PSS parameters in the Public Key
Info
Algorithm Identifier, it's recommended that the hash selected matches the
security of the key
- signature hash and the hash used for mask generation must be the same
both
in public key parameters in certificate and in signature parameters
- the salt length must equal at least 32 for SHA-256, 48 for SHA-384 and 64
bytes for SHA-512""

And yet, in a follow-up, you replied

""that would require hardcoding salt lengths, given their meaning in
subjectPublicKeyInfo, I wouldn't be too happy about it

looking at OpenSSL behaviour, it would likely render all past signatures
invalid and making signatures with already released software unnecessarily
complex (OpenSSL defaults to as large salt as possible)"

I hope you can see how these two are in conflict - on the one hand, you
suggest the policy should require X, but then suggest the implementation
should not enforce X, because it would invalidate OpenSSL signatures.

Similarly, with respect to the differences in our approaches, the framing
you put forward is:
""for X.509 only DER is allowed, if the tags or values are not encoded with
minimal number of bytes necessary, or with indeterminate length, it's not
DER
it's BER and that's strictly forbidden""

However, despite it being forbidden, the code contributed to NSS (and
mentioned in your original post -
https://bugzilla.mozilla.org/show_bug.cgi?id=1400844) does not actually
enforce this.

The fact that this new NSS implementation does not properly validate the
well-formedness of these signatures is somewhat in conflict with your
statement:
""it turned out to be a problem because a). it was the 90's, b). everybody
did
it like that, c). everybody assumed (not test) that security software was
written, well, securely..."""

So are we to conclude that this is still a problem because everybody
assumes, but does not test, that NSS is written, well, securely?

This is similarly an example of a policy 'requiring' X, but this is not
required through code or, with your proposed policy, required through
policy.

When I offered suggestions of how to avoid this, you seemingly rejected
them (when taking the message as a whole), with your suggestion being:
""or provide examples of specific encodings with explanations what can
change
and to what degree...""

Which is to afford the flexibility of 4055 by encoding a variety of
parameters - yet still in seeming direct conflict with the policy proposal
you yourself made.


These examples of internal inconsistencies are instead why I tried to focus
on first principles, and would like to revisit them. Framed differently:

1) Do you believe it represents a security risk to mix RSA-PKCS#1v1.5 and
RSA-PSS signatures with the same key?
2) Do you believe it represents a security risk to mix hash algorithms
within RSA-PSS signatures with the same key?

These questions are, perhaps, the crux of our disagreement. They should be
answered 'yes/no'. If they're yes, we should take specific steps to ensure
that risk is minimized. If that answer is no, we should avoid adding
complexity to the ecosystem. Hopefully that makes it a clear position.

Hubert Kario

unread,
Nov 29, 2017, 7:56:33 AM
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
I provided a statement of my opinion, I didn't claim in it what your position
was.

You provided a statement that read "You're [...] propose to enforce neither of
those through code or policy". That's putting words in my mouth - words which
are exact opposites of my stance on the issue.

> To your question, the following statements were made earlier in the thread:
> "" - issuing certificates may include RSA-PSS parameters in the Public Key
> Info
> Algorithm Identifier, it's recommended that the hash selected matches the
> security of the key
> - signature hash and the hash used for mask generation must be the same
> both
> in public key parameters in certificate and in signature parameters
> - the salt length must equal at least 32 for SHA-256, 48 for SHA-384 and 64
> bytes for SHA-512""
>
> And yet, in a follow-up, you replied
>
> ""that would require hardcoding salt lengths, given their meaning in
> subjectPublicKeyInfo, I wouldn't be too happy about it
>
> looking at OpenSSL behaviour, it would likely render all past signatures
> invalid and making signatures with already released software unnecessarily
> complex (OpenSSL defaults to as large salt as possible)"
>
> I hope you can see how these two are in conflict - on the one hand, you
> suggest the policy should require X, but then suggest the implementation
> should not enforce X, because it would invalidate OpenSSL signatures.

they're not in conflict, the use of "at least" was deliberate in the first
quote

> Similarly, with respect to the differences in our approaches, the framing
> you put forward is:
> ""for X.509 only DER is allowed, if the tags or values are not encoded with
> minimal number of bytes necessary, or with indeterminate length, it's not
> DER
> it's BER and that's strictly forbidden""
>
> However, despite it being forbidden, the code contributed to NSS (and
> mentioned in your original post -
> https://bugzilla.mozilla.org/show_bug.cgi?id=1400844) does not actually
> enforce this.

then that's a bug that needs to be fixed.

Ryan, I have at least half a dozen bugs on my name in b.m.o complaining
about wrong alert /descriptions/ being sent by NSS in response to malformed
packets. You really think that I won't be as thorough with RSA-PSS in
certificates?

I already have bugs filed complaining of NSS allowing SHA-1 in MGF1 when sha-1
is disabled through policy!

> The fact that this new NSS implementation does not properly validate the
> well-formedness of these signatures is somewhat in conflict with your
> statement:
> ""it turned out to be a problem because a). it was the 90's, b). everybody
> did
> it like that, c). everybody assumed (not test) that security software was
> written, well, securely..."""
>
> So are we to conclude that this is still a problem because everybody
> assumes, but does not test, that NSS is written, well, securely?

I definitely do not assume that; I just do not consider the issues we suspect
to be there and issues we know about (and plan to fix in the near future) to
be blockers for shipping it, because RSA-PSS is in its infancy on the public
'net.

I do think that they need to be resolved before TLS 1.3 is in its final state
though.

> This is similarly an example of a policy 'requiring' X, but this is not
> required through code or, with your proposed policy, required through
> policy.

the requirement that certificates be DER encoded is part of the X.509
standard - it's implicit

the policy doesn't state that the byte is 8 bits long or that 'a' is encoded
using big endian octet of decimal value 97 either...

> When I offered suggestions of how to avoid this, you seemingly rejected
> them (when taking the message as a whole), with your suggestion being:
> ""or provide examples of specific encodings with explanations what can
> change
> and to what degree...""
>
> Which is to afford the flexibility of 4055 by encoding a variety of
> parameters - yet still in seeming direct conflict with the policy proposal
> you yourself made.

Because I do not consider making the salt length rigid (one value allowed for
every hash) to be of any value. If it is not rigid, it would be silly to
provide an example encoding for every single possible valid combination.

OTOH, for salt length of 32 there would be one single correct encoding for
which we could "provide examples of" (as only SHA-256 is valid for such short
salt)
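Hubert's observation here follows directly from the proposed floor ("at
least 32/48/64"). A trivial sketch of that rule, with illustrative names
(the table and function are not from any implementation):

```python
# the proposed minimum salt per hash; SHA-1 and SHA-224 are rejected
# simply by being absent from the table
MIN_SALT = {"sha256": 32, "sha384": 48, "sha512": 64}

def salt_ok(hash_name, salt_len):
    # a salt is acceptable only if the hash is allowed at all and the
    # salt meets that hash's floor
    return hash_name in MIN_SALT and salt_len >= MIN_SALT[hash_name]

# which hashes admit a salt of exactly 32 octets? only SHA-256
allowed_for_32 = [h for h in MIN_SALT if salt_ok(h, 32)]
```

Since SHA-384 and SHA-512 require at least 48 and 64 octets respectively,
a 32-octet salt pins down the hash, and hence the single encoding Hubert
mentions.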

> These examples of internal inconsistencies are instead why I tried to focus
> on first principles, and would like to revisit them. Framed differently:

_perceived_ inconsistencies...

> 1) Do you believe it represents a security risk to mix RSA-PKCS#1v1.5 and
> RSA-PSS signatures with the same key?

depends on the circumstances

I consider RSA-PSS to be strictly more secure than PKCS#1 v1.5.

So if one already uses PKCS#1 v1.5 and adds RSA-PSS, it doesn't increase the
security of the system, but it doesn't decrease it either.

if one does use rsa-pss only (or has such intention), then use of pkcs#1 v1.5
with such key would lower the security of the system.

> 2) Do you believe it represents a security risk to mix hash algorithms
> within RSA-PSS signatures with the same key?

like I said before, the problem is not that the key can be used with sha-384
and sha-512 at the same time - that I don't see as a risk, at least not now

Use of the key with sha-1 and sha-384 at the same time, yes, I do, and I don't
think this should be allowed.

Now, one may expect that SHA-256 will go the way of SHA-1, then having ability
say "I won't use SHA-256", even when it is allowed by both policy and code is
a value-add.

> These questions are, perhaps, the crux of our disagreement. They should be
> answered 'yes/no'.

to paraphrase a great writer "Insufficient data for a 'yes/no' answer"

Ryan Sleevi

unread,
Nov 29, 2017, 11:01:54 AM
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
On Wed, Nov 29, 2017 at 7:55 AM, Hubert Kario via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
>
> > The fact that this new NSS implementation does not properly validate the
> > well-formedness of these signatures is somewhat in conflict with your
> > statement:
> > ""it turned out to be a problem because a). it was the 90's, b).
> everybody
> > did
> > it like that, c). everybody assumed (not test) that security software was
> > written, well, securely..."""
> >
> > So are we to conclude that this is still a problem because everybody
> > assumes, but does not test, that NSS is written, well, securely?
>
> I definitely do not assume that, I just do not consider the issues we
> suspect
> to be there and issues we know about (and plan to fix in near future) to be
> blockers for shipping it, because RSA-PSS is in its infancy on the public
> 'net.
>
> I do think that they need to be resolved before TLS 1.3 is in its final
> state
> though.
>

My hope is that we'd prioritize bugs that affect security (SHA-1 in MGF1)
and stability (RSA-PSS laxness), especially if they're identified prior
to/within days of shipping.


>
> > This is similarly an example of a policy 'requiring' X, but this is not
> > required through code or, with your proposed policy, required through
> > policy.
>
> requirement that the certificates be DER encoded is part of X.509 standard,
> it's implicit
>

And yet it's so frequently been messed up that an entire section of
mozilla::pkix exists to work around it.

The goal of policy is to learn from the mistakes of the past, to better
inform the future.

> the policy doesn't state that the byte is 8 bits long or that 'a' is encoded
> using big endian octet of decimal value 97 either...
>

You're correct - but if CAs were frequently messing this up, or similar
requirements within related sections, then I would just as ardently
advocate we include such in the policy.


> Because I do not consider making the salt length rigid (one value allowed
> for
> every hash) to be of any value. If it is not rigid, it would be silly to
> provide a correct encoding for every single possible valid encoding.
>

I do not consider making the salt length flexible to be of any value.
Further, I point to the past issues with respect to flexibility - and to
the many CVEs across a wide spectrum of clients, including NSS, with
respect to decoding signature parameters, that such flexibility will be
actively detrimental both to the implementation within Mozilla products and
to the implementation within the broader Web PKI community.

The extent of the argument for flexibility, so far, has been OpenSSL's
behaviour to produce RSA-PSS signatures with a maximal salt length. These
same clients are also incapable of parsing RSA-PSS SPKIs (that only came
recently, AFAICT). This probability of encountering such signatures within
the Web PKI itself is substantially lower, due to the many requirements
around protection of keys and the ease (or more aptly, difficulty) in
integrating such libraries with such systems - but even still, can be
configured by the client (nominally, the value of -1 indicates saltLen ==
len(hash), while -2 indicates the maximal encoding)
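The "maximal" salt mentioned here follows from the EMSA-PSS encoding bound
in RFC 8017 (sLen <= emLen - hLen - 2). A small sketch, with an
illustrative function name, shows why the maximal value differs so sharply
from a hash-length salt:

```python
import math

def max_pss_salt_len(mod_bits, hash_len):
    # RFC 8017, sec. 9.1.1: the encoded message EM is
    # emLen = ceil((modBits - 1) / 8) octets and must hold the hash,
    # the salt, the 0x01 separator and the 0xbc trailer octet, so
    # the salt can be at most emLen - hash_len - 2 octets long
    em_len = math.ceil((mod_bits - 1) / 8)
    return em_len - hash_len - 2
```

For a 2048-bit key with SHA-256 this gives 222 octets, not the 32 a
salt-equals-hash-length rule would produce - which is why signatures made
with the maximal-salt default cannot satisfy a rigid salt requirement.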


> > 1) Do you believe it represents a security risk to mix RSA-PKCS#1v1.5 and
> > RSA-PSS signatures with the same key?
>
> depends on the circumstances


> I consider RSA-PSS to be strictly more secure than PKCS#1 v1.5.
>
> So if one already uses PKCS#1 v1.5 and adds RSA-PSS, it doesn't increase
> the
> security of the system, but it doesn't decrease it either.
>

So are you stating you do not believe cross-algorithm attacks are relevant?
If it is, then it either theoretically or practically weakens the security
of the system.
If it's not, then there's no need to afford the flexibility.


> if one does use rsa-pss only (or has such intention), then use of pkcs#1
> v1.5
> with such key would lower the security of the system.
>

Sure. But why does that intention - which cannot be technically enforced
and is itself related to the usage of the key, not the trust in the
signatures - need to be expressed in the certificate? I propose it doesn't
- which is where the need to express that intention introduces significant,
and unnecessary, complexity.


> > 2) Do you believe it represents a security risk to mix hash algorithms
> > within RSA-PSS signatures with the same key?
>
> like I said before, the problem is not that the key can be used with
> sha-384
> and sha-512 at the same time - that I don't see as a risk, at least not now
>

Again, cross-algorithm attacks.


> Use of the key with sha-1 and sha-384 at the same time, yes, I do, and I
> don't
> think this should be allowed.
>

But that's not a risk to the key - that's a risk being proposed in which
the client accepts SHA-1.


> Now, one may expect that SHA-256 will go the way of SHA-1, then having
> ability
> say "I won't use SHA-256", even when it is allowed by both policy and code
> is
> a value-add.
>

I disagree. I don't believe there's value in the expression of that from
the CA within the certificate, for the reasons I previously mentioned -
that intention can be subverted if the CA is willing to use SHA-1 or
SHA-256 when they declared they will not, or if the CA is not willing, then
it's introducing unnecessary complexity into the client ecosystem for
limited benefits.

The extent to which this provides value is to allow a CA who, bearing a
certificate whose public key is constrained to SHA-384, and upon being
presented a SHA-256 certificate signed by the associated private key, to
claim "That's not misissuance, because I said don't trust anything other
than SHA-384". The value of that statement is extremely questionable - the
fact that the CA used their private key to sign something which they now
disclaim is in and of itself problematic. The suggestion that it's somehow
mitigated on the client because the client would accept it even if the CA
screwed up and did it anyways belies the idea that the single most critical
thing CAs must do is protect their private key and limit what they sign.

This is why I have such a fundamental problem with the policy proposal, and
would rather see such restrictions be placed on the key and its operations
(for which a SHA-256 signature from a SHA-384 key is an act of active
misissuance, regardless if the client accepts it) than upon the certificate
(which can easily be subverted). Similarly, the notion that the need for
such flexibility in the signatures accepted is to support a half-complete
notion of backwards compatibility, or to allow signers the flexibility they
want to set their security policy, is to ignore the complexity being
imposed upon client implementations, and the number of bugs that have
resulted from having to support signer-friendly options rather than
verifier-friendly limitations.

Hubert Kario

unread,
Nov 29, 2017, 1:10:12 PM
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
On Wednesday, 29 November 2017 17:00:58 CET Ryan Sleevi wrote:
> On Wed, Nov 29, 2017 at 7:55 AM, Hubert Kario via dev-security-policy <
>
> dev-secur...@lists.mozilla.org> wrote:
> > Because I do not consider making the salt length rigid (one value allowed
> > for
> > every hash) to be of any value. If it is not rigid, it would be silly to
> > provide a correct encoding for every single possible valid encoding.
>
> I do not consider making the salt length flexible to be of any value.
> Further, I point to the past issues with respect to flexibility - and to
> the many CVEs across a wide spectrum of clients, including NSS, with
> respect to decoding signature parameters, that such flexibility will be
> actively detrimental both to the implementation within Mozilla products and
> to the implementation within the broader Web PKI community.
>
> The extent of the argument for flexibility, so far, has been OpenSSL's
> behaviour to produce RSA-PSS signatures with a maximal salt length. These
> same clients are also incapable of parsing RSA-PSS SPKIs (that only came
> recently, AFAICT).

yes, it can't handle RSA-PSS SPKI, but it can handle RSA-PSS in signatures,
and my understanding is that we want the same kind of limitations for
signatures and for SPKI - if only to limit the confusion

> This probability of encountering such signatures within
> the Web PKI itself is substantially lower, due to the many requirements
> around protection of keys and the ease (or more aptly, difficulty) in
> integrating such libraries with such systems - but even still, can be
> configured by the client (nominally, the value of -1 indicates saltLen ==
> len(hash), while -2 indicates the maximal encoding)

Web PKI, now - yes
But the problem is that Microsoft CAs (for Active Directory) default to
RSA-PSS signatures, which means that Firefox cannot be deployed on such
internal networks.
This does not help Firefox market share.

and I'm not saying that it is not possible to create signatures with correct
salt lengths with old OpenSSL - but it is not the default, so any certificates
that were created previously will likely use the default. That in turn means
it would require reprovisioning all such certificates - not a task that is
easy at scale, welcome, or of any benefit from the PoV of users.

> > > 1) Do you believe it represents a security risk to mix RSA-PKCS#1v1.5
> > > and
> > > RSA-PSS signatures with the same key?
> >
> > depends on the circumstances
> >
> >
> > I consider RSA-PSS to be strictly more secure than PKCS#1 v1.5.
> >
> > So if one already uses PKCS#1 v1.5 and adds RSA-PSS, it doesn't increase
> > the
> > security of the system, but it doesn't decrease it either.
>
> So are you stating you do not believe cross-algorithm attacks are relevant?

No, I don't believe that cross-algorithm attacks from RSA-PSS to PKCS#1 v1.5
are likely.

I do consider users of PKCS#1 v1.5 to be vulnerable to attacks that can be
leveraged against both PKCS#1 v1.5 and RSA-PSS

> If it is, then it either theoretically or practically weakens the security
> of the system.
> If it's not, then there's no need to afford the flexibility.
>
> > if one does use rsa-pss only (or has such intention), then use of pkcs#1
> > v1.5
> > with such key would lower the security of the system.
>
> Sure. But why does that intention - which cannot be technically enforced

The RSA-PSS OID in the SPKI does exactly that - what do you mean by "cannot be
technically enforced"?

> and is itself related to the usage of the key, not the trust in the
> signatures - need to be expressed in the certificate?

If the certificate has an SPKI with the RSA-PSS id, that means exactly that -
never trust PKCS#1 v1.5 signatures made with this key.
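The check being described fits in a few lines. The OID values below are the
real ones from RFC 4055/RFC 8017; the function name is made up for
illustration:

```python
# rsaEncryption: an unrestricted RSA key
RSA_ENCRYPTION = "1.2.840.113549.1.1.1"
# id-RSASSA-PSS: a key whose SPKI restricts it to PSS signatures
ID_RSASSA_PSS = "1.2.840.113549.1.1.10"

def v15_signature_acceptable(spki_algorithm_oid):
    # a PKCS#1 v1.5 signature is acceptable only when the subject's SPKI
    # does NOT mark the key as PSS-only
    return spki_algorithm_oid != ID_RSASSA_PSS
```

A verifier implementing this rejects any PKCS#1 v1.5 signature made with a
PSS-marked key, which is the technical enforcement Hubert refers to.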

> I propose it doesn't
> - which is where the need to express that intention introduces significant,
> and unnecessary, complexity.

I really don't think we're on the same page here...

> > > 2) Do you believe it represents a security risk to mix hash algorithms
> > > within RSA-PSS signatures with the same key?
> >
> > like I said before, the problem is not that the key can be used with
> > sha-384
> > and sha-512 at the same time - that I don't see as a risk, at least not
> > now
>
> Again, cross-algorithm attacks.
>
> > Use of the key with sha-1 and sha-384 at the same time, yes, I do, and I
> > don't
> > think this should be allowed.
>
> But that's not a risk to the key - that's a risk being proposed in which
> the client accepts SHA-1.
>
> > Now, one may expect that SHA-256 will go the way of SHA-1, then having
> > ability
> > say "I won't use SHA-256", even when it is allowed by both policy and code
> > is
> > a value-add.
>
> I disagree. I don't believe there's value in the expression of that from
> the CA within the certificate, for the reasons I previously mentioned -
> that intention can be subverted if the CA is willing to use SHA-1 or
> SHA-256 when they declared they will not, or if the CA is not willing, then
> it's introducing unnecessary complexity into the client ecosystem for
> limited benefits.

If the RSA-PSS parameters in the SPKI say SHA-256, a SHA-1 signature made with
such a certificate never was and never will be valid. So creating SHA-1
signatures is useless from the point of view of a legitimate CA.

It's like having technically constrained CA limited to .com domain and issuing
certificate for .org domain - no valid PKIX implementation will trust them.

> The extent of which this provides value is to allow a CA who, bearing a
> certificate whose public key is constrained to SHA-384, and upon being
> presented a SHA-256 certificate signed by the associated private key, to
> claim "That's not misissuance, because I said don't trust anything other
> than SHA-384". The value of that statement is extremely questionable - the
> fact that the CA used their private key to sign something which they now
> disclaim is in and of itself problematic.

even if that happened, what use would such a certificate have? it
wouldn't be accepted by any implementation that follows RFC 5756...

also my intention was slightly different: "Don't trust anything different,
because I will never use anything different". So if anything different shows
up, that likely means key compromise (or failure in internal procedures).

> This is why I have such a fundamental problem with the policy proposal, and
> would rather see such restrictions be placed on the key and its operations
> (for which a SHA-256 signature from a SHA-384 key is an act of active
> misissuance, regardless if the client accepts it) than upon the certificate
> (which can easily be subverted)

I see those limitations as being placed on a particular public key - though
spelling it out in policy is probably a good idea

> . Similarly, the notion that the need for
> such flexibility in the signatures accepted is to support a half-complete
> notion of backwards compatibility, or to allow signers the flexibility they
> want to set their security policy, is to ignore the complexity being
> imposed upon client implementations, and the number of bugs that have
> resulted from having to support signer-friendly options rather than
> verifier-friendly limitations.

even if we forbid RSA-PSS parameters (require them to be omitted) in the SPKI
but allow the RSA-PSS OID in the SPKI, that still leaves the problem of
variable salt length

Ryan Sleevi

unread,
Nov 29, 2017, 4:00:25 PM
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
On Wed, Nov 29, 2017 at 1:09 PM, Hubert Kario <hka...@redhat.com> wrote:

> > The extent of the argument for flexibility, so far, has been OpenSSL's
> > behaviour to produce RSA-PSS signatures with a maximal salt length. These
> > same clients are also incapable of parsing RSA-PSS SPKIs (that only came
> > recently, AFAICT).
>
> yes, it can't handle RSA-PSS SPKI, but it can handle RSA-PSS in signatures,
> and my understanding is that we want the same kind of limitations for
> signatures and for SPKI - if only to limit the confusion
>

Correct, the behaviour of OpenSSL is to unfortunately take maximal
advantage of the unnecessary complexity of RFC 4055.

I do not feel that it is in the best interest of users or security to
interoperate with that decision.


> > This probability of encountering such signatures within
> > the Web PKI itself is substantially lower, due to the many requirements
> > around protection of keys and the ease (or more aptly, difficulty) in
> > integrating such libraries with such systems - but even still, can be
> > configured by the client (nominally, the value of -1 indicates saltLen ==
> > len(hash), while -2 indicates the maximal encoding)
>
> Web PKI, now - yes
> But the problem is that Microsoft CAs (for Active Directory) default to
> RSA-PSS signatures, which means that Firefox cannot be deployed on such
> internal networks.
> This does not help Firefox market share.
>

Let's be precise: My proposal does not change the status quo. It does not
improve the situation as deployed, I readily admit, but it does not make
the entire ecosystem worse because of it.

Further, let's not conflate Microsoft CA generated certificates - which is
what you're noting as the internal networks - with OpenSSL generated
certificates - for which we have no data as to the extent of the problem.

Today's status quo: If you have deployed RSA-PSS certificates on your
network, browsers such as Chrome or Firefox, and systems such as macOS or
Android, do not accept such certificates. You need to replace such
certificates.
Tomorrow's status quo: If you have deployed RSA-PSS certificates on your
network, and they do not reflect a sensible security configuration,
browsers such as Chrome or Firefox, and systems such as macOS or Android,
do not accept such certificates. You'd need to replace such certificates -
but you would be able to do so with 'sensible' RSA-PSS configuration.


> and I'm not saying that it is not possible to create signatures with
> correct
> salt lengths with old OpenSSL - but it is not the default, so any kind of
> certificates that were created previously will likely use the default.
> That in
> turn means that it would require reprovisioning all certificates like that,
> not a task that is easy at scale, welcome or with any benefit from the PoV
> of
> users.
>

Again, I think this is an extremely poor argument for introducing global
complexity into the ecosystem, and known-dangerous anti-patterns into
libraries such as NSS.

That is, when I weigh the risk of a limited set of Enterprise users (which
we know is limited, given the above constraints and the lack of prevalence
of OpenSSL in the CA space), I do not feel that it is reasonable or
responsible to suggest that we must accept undue complexity because they
crapped the proverbial bed first by shipping a non-sensical configuration.


> > So are you stating you do not believe cross-algorithm attacks are
> relevant?
>
> No, I don't believe that cross-algorithm attacks from RSA-PSS to PKCS#1
> v1.5
> are likely.
>
> I do consider users of PKCS#1 v1.5 to be vulnerable to attacks that can be
> leveraged against both PKCS#1 v1.5 and RSA-PSS
>

I'm really not sure how to parse your response. I'm not sure if your "No"
is your answer - as in you don't believe they're relevant - or "No" as you
disagreeing with my framing of the question and that "Yes", you do believe
they're relevant, despite them not being likely.

I'm further not sure how to parse your remark about "users of PKCS#1 v1.5"
- as to whether you mean the signers or the verifiers.


> > Sure. But why does that intention - which cannot be technically enforced
>
> RSA-PSS OID in SPKI does exactly that, what do you mean "cannot be
> technically
> enforced"?
>

It does not prevent such a signature from being created - that's what I
mean.

If it doesn't prevent such a signature, and the act of signing weakens the
key itself, then such a constraint is pointless.
If it simply indicates to a client "Don't accept this signature", but the
act of signing weakens the key still, then such a constraint is pointless.
If it does not weaken the key, then indicating to the client such a
constraint is pointless, because any client that encounters such a
signature has proof that the CA has failed to abide by the policy of their
key - and if so, everything used with that key should be rightfully called
into question, and no 'damage mitigation' has been achieved.


> > and is itself related to the usage of the key, not the trust in the
> > signatures - need to be expressed in the certificate?
>
> If the certificates has SPKI with RSA-PSS id, that means exactly that -
> never
> trust PKCS#1 v1.5 signatures made with this key.
>

Yes. And expressing that is pointless (see above).


> > I disagree. I don't believe there's value in the expression of that from
> > the CA within the certificate, for the reasons I previously mentioned -
> > that intention can be subverted if the CA is willing to use SHA-1 or
> > SHA-256 when they declared they will not, or if the CA is not willing,
> then
> > it's introducing unnecessary complexity into the client ecosystem for
> > limited benefits.
>
> If the RSA-PSS parameters in SPKI say SHA-256, SHA-1 signature made with
> such
> certificates never was and never will be valid. So creating SHA-1
> signatures
> is useless from point of view of legitimate CA.
>
> It's like having technically constrained CA limited to .com domain and
> issuing
> certificate for .org domain - no valid PKIX implementation will trust them.
>

I understand - but I'm saying that what we're discussing fundamentally
relates to the key.

If a CA was technically constrained to .com, said they would never issue
for something not .com, but then issued for .org, it's absolutely grounds
for concern.
If it doesn't matter whether a CA issues for .com or .org, and they simply
choose to only issue for .com, then it doesn't make sense to require
clients to support the CA's expression of intent if doing so introduces
security risk to clients (and it does) and complexity risk.



>
> > The extent of which this provides value is to allow a CA who, bearing a
> > certificate whose public key is constrained to SHA-384, and upon being
> > presented a SHA-256 certificate signed by the associated private key, to
> > claim "That's not misissuance, because I said don't trust anything other
> > than SHA-384". The value of that statement is extremely questionable -
> the
> > fact that the CA used their private key to sign something which they now
> > disclaim is in and of itself problematic.
>
> even if that happened, what use would such a certificate have? it
> wouldn't be accepted by any implementation that follows RFC 5756...
>
> also my intention was slightly different: "Don't trust anything different,
> because I will never use anything different". So if anything different
> shows
> up, that likely means key compromise (or failure in internal procedures).
>

I understand your intent is to capture "Don't trust if I screw up". I'm
saying that the value of such a statement is non-existent for the
complexity required to support that statement, and that encountering
anything contrary to that statement, as you state, likely means key
compromise.

You're focusing on "Well, we could reject that certificate" - I'm focusing
on "Yes, but they did something stupid with their key, and now we have to
question every certificate". Further, if that doing something stupid with
the key undermines the security of the key itself (e.g. through
cross-algorithm attacks), then it doesn't matter what the cert says - we
want to make sure they do what they do.

Put differently yet again, I think there's no value in a CA saying "Don't
trust me if I can't protect my key" because it's already implied that we
don't trust a CA that can't protect its key. Further, the extent of which
it matters to the risk of clients - what happens if something is SHA-256
signed rather than SHA-384 - is only a property of whether that signature
algorithm is itself weak, and if it is, clients themselves should be
disabling that algorithm, rather than relying on the CA's attestations.

Don't get me wrong, I'm supportive of restricting CA's ability to do
damage. I'm posing that either there's no ability to do damage (meaning the
cross-hash-algorithm usage is not a security risk) or that the damage has
already been done (by failing to control the key). This is very different
than something like nameConstraints or EKUs - this is about the key itself.
That's my point.


> even if we forbid RSA_PSS parameters (require them to be omitted) in SPKI
> but
> allow RSA-PSS OID in SPKI that still leaves the problem of variable salt
> length


Yes. And I don't believe that supporting such a variable salt length is a
good idea compared to the complexity it introduces, and I believe that the
compatibility risk (with OpenSSL) has been significantly overstated as to
its practical impact, given the state of the ecosystem.

Hubert Kario

unread,
Nov 30, 2017, 12:22:33 PM
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
On Wednesday, 29 November 2017 21:59:39 CET Ryan Sleevi wrote:
> On Wed, Nov 29, 2017 at 1:09 PM, Hubert Kario <hka...@redhat.com> wrote:
> > > So are you stating you do not believe cross-algorithm attacks are
> >
> > relevant?
> >
> > No, I don't believe that cross-algorithm attacks from RSA-PSS to PKCS#1
> > v1.5
> > are likely.
> >
> > I do consider users of PKCS#1 v1.5 to be vulnerable to attacks that can be
> > leveraged against both PKCS#1 v1.5 and RSA-PSS
>
> I'm really not sure how to parse your response. I'm not sure if your "No"
> is your answer - as in you don't believe they're relevant - or "No" as you
> disagreeing with my framing of the question and that "Yes", you do believe
> they're relevant, despite them not being likely.
>
> I'm further not sure how to parse your remark about "users of PKCS#1 v1.5"
> - as to whether you mean the signers or the verifiers.

if the certificate is usable with PKCS#1 v1.5 signatures, that makes it
vulnerable to attacks like Bleichenbacher's; if it is not usable with PKCS#1
v1.5, it's not vulnerable in practice to such attacks

> > > Sure. But why does that intention - which cannot be technically enforced
> >
> > RSA-PSS OID in SPKI does exactly that, what do you mean "cannot be
> > technically
> > enforced"?
>
> It does not prevent such a signature from being created - that's what I
> mean.

now you're being silly

the Mozilla policy doesn't prohibit the CA from making a service that will
perform RSA private key operation on arbitrary strings either but we would
still revoke trust from CA that would do such a thing, because just the fact
that such a service _could_ have been used maliciously (to sign arbitrary
certificates) is enough to distrust it!

> If it doesn't prevent such a signature, and the act of signing weakens the
> key itself, then such a constraint is pointless.
> If it simply indicates to a client "Don't accept this signature", but the
> act of signing weakens the key still, then such a constraint is pointless.
> If it does not weaken the key, then indicating to the client such a
> constraint is pointless, because any client that encounters such a
> signature has proof that the CA has failed to abide by the policy of their
> key - and if so, everything used with that key should be rightfully called
> into question, and no 'damage mitigation' has been achieved.

I don't think that it's the act of making the signature that weakens the key

If I generate a certificate that has an RSA-PSS SPKI, I won't be able to use
it for PKCS#1 v1.5 signatures - no widely deployed server will use it like
that and no widely deployed client will accept such use. So the chance that a
key will be abused like that, and will remain abused like that, is negligible.

and that's the whole point - defence in depth - to make it another hurdle to
jump through if you want to do something stupid
so where's the bug that removes support for that in Firefox? Will we get it in
before the next ESR? /s

> > > The extent of which this provides value is to allow a CA who, bearing a
> > > certificate whose public key is constrained to SHA-384, and upon being
> > > presented a SHA-256 certificate signed by the associated private key, to
> > > claim "That's not misissuance, because I said don't trust anything other
> > > than SHA-384". The value of that statement is extremely questionable -
> >
> > the
> >
> > > fact that the CA used their private key to sign something which they now
> > > disclaim is in and of itself problematic.
> >
> > even if that happened, what use would such a certificate have? it
> > wouldn't be accepted by any implementation that follows RFC 5756...
> >
> > also my intention was slightly different: "Don't trust anything different,
> > because I will never use anything different". So if anything different
> > shows
> > up, that likely means key compromise (or failure in internal procedures).
>
> I understand your intent is to capture "Don't trust if I screw up". I'm
> saying that the value of such a statement is non-existent for the
> complexity required to support that statement, and that encountering
> anything contrary to that statement, as you state, likely means key
> compromise.
>
> You're focusing on "Well, we could reject that certificate" - I'm focusing
> on "Yes, but they did something stupid with their key, and now we have to
> question every certificate".

so?

you're arguing that machine readable CA policy is a _bad thing_ ?

> Further, if that doing something stupid with
> the key undermines the security of the key itself (e.g. through
> cross-algorithm attacks), then it doesn't matter what the cert says - we
> want to make sure they do what they do.

so what is it? either it's "technically impossible" or "we want to make sure
they do what they do"

> Put differently yet again, I think there's no value in a CA saying "Don't
> trust me if I can't protect my key" because it's already implied that we
> don't trust a CA that can't protect its key. Further, the extent of which
> it matters to the risk of clients - what happens if something is SHA-256
> signed rather than SHA-384 - is only a property of whether that signature
> algorithm is itself weak, and if it is, clients themselves should be
> disabling that algorithm, rather than requiring on the CA's attestations.
>
> Don't get me wrong, I'm supportive of restricting CA's ability to do
> damage. I'm posing that either there's no ability to do damage (meaning the
> cross-hash-algorithm usage is not a security risk) or that the damage has
> already been done (by failing to control the key). This is very different
> than something like nameConstraints or EKUs - this is about the key itself.
> That's my point.

you're missing one thing: we don't know if it truly, positively, absolutely
isn't a problem

it probably isn't

but it's not a certainty

(I can see the world black and white too!)

> > even if we forbid RSA_PSS parameters (require them to be omitted) in SPKI
> > but
> > allow RSA-PSS OID in SPKI that still leaves the problem of variable salt
> > length
>
> Yes. And I don't believe that supporting such a variable salt length is a
> good idea compared to the complexity it introduces, and I believe that the
> compatibility risk (with OpenSSL) has been significantly overstated as to
> its practical impact, given the state of the ecosystem.

"What a good point you're making Hubert! Please wait a moment while I look
through certificates posted on https://scans.io/ looking for ones that use
RSA-PSS signatures"

"no problem, that would be appreciated"

"Thank you for waiting, among the ones that are exposed on the Internet I
found ~5 used on Azure (all signed with SHA-256 and 32byte salt), few (~20)
that look like sip gateways (most signed with SHA-256 and 32byte salt, some
with SHA-1 and 20 byte salt), and one that had SHA-384 signature and 48 byte
long salt. I didn't find a single one that used salt length that was different
than hash length."

"ah, yes, you were right Ryan, it doesn't look like using rigid salt lengths
would negatively impact already deployed environments, at least the most
common ones"

Ryan Sleevi

unread,
Nov 30, 2017, 12:46:53 PM
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
On Thu, Nov 30, 2017 at 12:21 PM, Hubert Kario <hka...@redhat.com> wrote:

> if the certificate is usable with PKCS#1 v1.5 signatures, it makes it
> vulnerable to attacks like the Bleichenbacher, if it is not usable with
> PKCS#1
> v1.5 it's not vulnerable in practice to such attacks
>

A certificate does not produce signatures - a key does.
A certificate carries signatures, but only relevant to verifiers.

Your reference to Bleichenbacher again makes it unclear if you're
expressing a concern about the protection of the private key or the quality
of the signatures, or whether you're conflating with ciphersuite
negotiation and encryption.


> > It does not prevent such a signature from being created - that's what I
> > mean.
>
> now you're being silly
>
> the Mozilla policy doesn't prohibit the CA from making a service that will
> perform RSA private key operation on arbitrary strings either but we would
> still revoke trust from CA that would do such a thing, because just the
> fact
> that such a service _could_ have been used maliciously (to sign arbitrary
> certificates) is enough to distrust it!
>

No, such CAs do exist, and we haven't revoked such trust. Your statement is
empirically false.

That's why I'm trying to be explicit in the policy here - that the policy
is about the keys, not the certificates.


> I don't think that it's the act of making the signature that weakens the
> key
>

Great, then we don't need to restrict keys.


> If I generate a certificate that has an RSA-PSS SPKI I won't be able to use it for
> PKCS#1 v1.5 signatures - no widely deployed server will use it like that
> and
> no widely deployed client will accept such use. So a chance that a key
> will be
> abused like that and will remain abused like that is negligible.
>
> and that's the whole point - defence in depth - to make it another hurdle
> to
> jump through if you want to do something stupid
>

That's an argument often made for unnecessary complexity - or more aptly,
the benefits of such defense in depth need to be weighed against how that
cost is distributed. In the case of RSA-PSS, because of the inanity of RFC
4055, that cost is distributed unevenly - it places the burden of such
complexity upon the clients, which has repeatedly led to exploits (and
again, NSS has itself been vulnerable to this several times, or deftly
avoided it due to Brian Smith's keen rejection of unnecessary complexity).

It is a joint we don't need, and whose oiling will be needless toil.

It also adds complexity to the CA ecosystem, for CAs who are unfortunately
incapable of complying with the RFCs unless their hands are held and they
are carefully guided down the happy path. To the extent those CAs should
not be trusted is easy to say, but as shown time and time again, it's hard
to express "You must be this competent to ride" in an objective policy, and
even harder to address the situation where the CA is trusted and then lets
all the competent staff go.


>
> > I understand - but I'm saying that what we're discussing fundamentally
> > relates to the key.
> >
> > If a CA was technically constrained to .com, said they would never issue
> > for something not .com, but then issued for .org, it's absolutely grounds
> > for concern.
> > If it doesn't matter whether a CA issues for .com or .org, and they
> simply
> > choose to only issue for .com, then it doesn't make sense to require
> > clients to support the CA's expression of intent if doing so introduces
> > security risk to clients (and it does) and complexity risk.
>
> so where's the bug that removes support for that in Firefox? Will we get
> it in
> before the next ESR? /s
>

You've again misunderstood the argument. But I'm not sure that there's much
value in continuing - I've attempted to get you to either outline the value
proposition or to outline the risks. We've danced around on semantic games,
but we've identified that there is no risk in commingling these hash
algorithms, that the only risk in being strict relates to a single software
product with an unknown (but understandably minuscule, by virtue of no bugs
being filed) number of extant certificates, while we have a profound
opportunity to do the Right Thing based on lessons repeatedly learned.

you're arguing that machine readable CA policy is a _bad thing_ ?
>

Yes, when the method of expressing that policy is unnecessarily complex,
burdensome, and error prone for both clients and servers.

RFC 4055 is a bad RFC. It is a host of unnecessary joints and unnecessary
complexity that does not, in practice, add any modicum of 'future
compatibility' as it tried to do, does not in practice reduce any risks (as
it tried to do), and does not provide any tangible value over a sane
expression of that via policy.

Hubert Kario

unread,
Nov 30, 2017, 3:24:00 PM
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
On Thursday, 30 November 2017 18:46:12 CET Ryan Sleevi wrote:
> On Thu, Nov 30, 2017 at 12:21 PM, Hubert Kario <hka...@redhat.com> wrote:
> > if the certificate is usable with PKCS#1 v1.5 signatures, it makes it
> > vulnerable to attacks like the Bleichenbacher, if it is not usable with
> > PKCS#1
> > v1.5 it's not vulnerable in practice to such attacks
>
> A certificate does not produce signatures - a key does.
> A certificate carries signatures, but only relevant to verifiers.

and verifiers are the ones who enforce that the signatures are sane
and for verifiers only the certificate is visible, not the private key
and the certificate is essentially a blob to the user of the key, not
something he or she can edit, so it is a commitment (even for a CA, as it is
no longer in control of that self-signed one)

> Your reference to Bleichenbacher again makes it unclear if you're
> expressing a concern about the protection of the private key or the quality
> of the signatures, or whether you're conflating with ciphersuite
> negotiation and encryption.

a key with an rsaEncryption SPKI can be used for both signatures and
encryption; a key with an RSA-PSS SPKI can't be used for encryption if at
least one party is standards compliant

and the only way the other party can tell whether the key is an rsa or an
rsa-pss key is by looking at the certificate

so whether the _private_ key is of rsa or rsa-pss type is a purely
philosophical concern
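
The distinction drawn here - that a relying party can only tell an "RSA" key from an "RSA-PSS" key by the certificate's SPKI algorithm identifier - comes down to two OIDs from RFC 4055. A minimal sketch (the helper function is illustrative, not from any library):

```python
# The two SubjectPublicKeyInfo algorithm OIDs under discussion (RFC 4055):
RSA_ENCRYPTION = "1.2.840.113549.1.1.1"   # rsaEncryption: usable for PKCS#1
                                          # v1.5, PSS, and RSA key exchange
ID_RSASSA_PSS = "1.2.840.113549.1.1.10"   # id-RSASSA-PSS: PSS signatures only

def key_exchange_allowed(spki_algorithm_oid: str) -> bool:
    """A standards-compliant peer must refuse RSA key exchange (and PKCS#1
    v1.5 signatures) when the certificate types the key as RSA-PSS."""
    return spki_algorithm_oid == RSA_ENCRYPTION

assert key_exchange_allowed(RSA_ENCRYPTION)
assert not key_exchange_allowed(ID_RSASSA_PSS)
```

That is the whole mechanism: nothing about the private key itself changes; only the certificate's algorithm OID tells the peer which operations are in scope.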

> > > It does not prevent such a signature from being created - that's what I
> > > mean.
> >
> > now you're being silly
> >
> > the Mozilla policy doesn't prohibit the CA from making a service that will
> > perform RSA private key operation on arbitrary strings either but we would
> > still revoke trust from CA that would do such a thing, because just the
> > fact
> > that such a service _could_ have been used maliciously (to sign arbitrary
> > certificates) is enough to distrust it!
>
> No, such CAs do exist, and we haven't revoked such trust. Your statement is
> empirically false.
>
> That's why I'm trying to be explicit in the policy here - that the policy
> is about the keys, not the certificates.

but the private keys are invisible, and they should be invisible to anyone
but the owner! the only way to check whether they are used according to their
intention is to look at the corresponding certificate!

it's purely an implementation detail whether an implementation stores the
rsa-pss params with the private key or derives them from the associated
certificate - it doesn't matter

> > I don't think that it's the act of making the signature that weakens the
> > key
>
> Great, then we don't need to restrict keys.

that's cherry picking and taking sentences out of context...

> > > I understand - but I'm saying that what we're discussing fundamentally
> > > relates to the key.
> > >
> > > If a CA was technically constrained to .com, said they would never issue
> > > for something not .com, but then issued for .org, it's absolutely
> > > grounds
> > > for concern.
> > > If it doesn't matter whether a CA issues for .com or .org, and they
> >
> > simply
> >
> > > choose to only issue for .com, then it doesn't make sense to require
> > > clients to support the CA's expression of intent if doing so introduces
> > > security risk to clients (and it does) and complexity risk.
> >
> > so where's the bug that removes support for that in Firefox? Will we get
> > it in
> > before the next ESR? /s
>
> You've again misunderstood the argument. But I'm not sure that there's much
> value in continuing - I've attempted to get you to either outline the value
> proposition or to outline the risks.

the only risk you're bringing up again and again is a nebulous "but it's hard
to do, so we may make mistakes"

guess what: that's the case for all of crypto, all of network protocols,
that's why we repeat the mantra of "don't implement crypto yourself", it's
silly to bring it up in the first place

> We've danced around on semantic games,

And how am I supposed to answer that without invoking an ad hominem?

> but we've identified that there is no risk in comingling these hash
> algorithms,

no, there is risk, we just can't quantify it

> that the only risk in being strict relates to a single software
> product with an unknown (but understandably miniscule, by virtue of no bugs
> being filed) number of extant certificates, while we have a profound
> opportunity to do the Right Thing based on lessons repeatedly learned.

that's about RSA-PSS parameters in SPKI or about RSA-PSS OID in SPKI?

> > you're arguing that machine readable CA policy is a _bad thing_ ?
>
>
> Yes, when the method of expressing that policy is unnecessary complex,
> burdensome, and error prone for both clients and servers.

I already said I'm ok with the hardcoded encoding of the RSA-PSS params, with
salt sizes assigned to hash sizes, and I started by saying that I'm ok with
forbidding RSA-PSS params in EE certificate SPKI, that's not enough for you?

Ryan Sleevi

unread,
Nov 30, 2017, 3:50:25 PM
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
Then I think this is an incredibly convoluted concern.

If I'm understanding correctly - and I hope you can correct me if I'm not -
the view is that it is valuable to limit a public key via an unnecessarily
complex encoding scheme so that it does not get used for encryption (in
what protocol? Obviously not X.509 - so presumably certificates) so that it
is robust against CCA like Bleichenbacher?

It feels like I just spewed out words writing that out, because it does not
seem like it fits a consistent or coherent threat model, and so surely I
must be missing something.

It does feel like again the argument is The CA/EE should say 'I won't do X'
so that a client won't accept a signature if the CA does X, except it
doesn't change the security properties at all if the CA/EE does actually do
X, and the only places it does affect the security properties are either
already addressed (e.g. digitalSignature EKU) or themselves not protected
by the proposed mechanism.

This doesn't make sense. It's not good policy.


>
> > No, such CAs do exist, and we haven't revoked such trust. Your statement
> is
> > empirically false.
> >
> > That's why I'm trying to be explicit in the policy here - that the policy
> > is about the keys, not the certificates.
>
> but the private keys are invisible and they should be invisible to anyone
> but
> the owner! the only way to check if they are used according to their
> intention
> is to look at the corresponding certificate!
>

I thought I already demonstrated that this was false, because the
relationship of capability statements to keys is n to 1, and it is not the
private key that makes those statements.

And that intention can be expressed through means other than the inanity of
RFC 4055 to begin with, or that intention is an unnecessary option imposed
on clients to satisfy the whims of the signature producer, and we should
favor clients over producers.


> it's purely an implementation detail whether an implementation stores the
> rsa-pss params with the private key or derives them from the associated
> certificate - it doesn't matter
>

And this is where I disagree - the fact that deriving them from the
associated certificate imposes a significant cost upon clients is something
to be concerned about and worthy of prohibition. There's no need to
outsource the implementation cost upon the ecosystem for what can and
ostensibly should be the CA's ability to control.


> > > I don't think that it's the act of making the signature that weakens
> the
> > > key
> >
> > Great, then we don't need to restrict keys.
>
> that's cherry picking and taking sentences out of context...
>

I don't think it is.

There has been a constant conflation between a desire to:
1) Ensure that a sufficient level of security is afforded all clients
against known risks
2) Allow producers flexibility to implement their arbitrary security
policies

I disagree that 2 should be a consideration, unless it comes without client
cost. In the case of RFC 4055, it comes with substantial client cost (as
already demonstrated by the missteps made by NSS, past and present), and so
such flexibility is unreasonably complex. We've also seemingly reached
agreement that, with respect to 1, it's either a matter of client policy
(to accept or reject a given 'weak' signature) or that it's a matter of
cross-algorithm attacks (which we've agreed doesn't undermine security)

So 1 doesn't apply, 2 is too costly, and clients matter more than producers.


> the only risk you're bringing up again and again is a nebulous "but it's
> hard
> to do, so we may make mistakes"
>
> guess what: that's the case for all of crypto, all of network protocols,
> that's why we repeat the mantra of "don't implement crypto yourself", it's
> silly to bring it up in the first place
>

We also have the mantra "Have one joint, and keep it oiled". The lessons we
saw from the conversion of TLS 1.2 to TLS 1.3 repeatedly underscore the
need to pursue simplicity over unnecessary flexibility. You have yet to
articulate any argument in favor of that flexibility other than "someone
might want it" and "a system mostly unused for this case started using it".
Those are poor arguments.

Meanwhile, we have identified past security bugs and present compatibility
bugs from such flexibility, we've taken intentional steps in the past to
curtail such flexibility (cf. the change in how NSS validates
RSA-PKCS#1 v1.5 signatures to be memcmp-based rather than decode-based), and
yet, seemingly, somehow, this time we'll get it right - even though it's
already been done wrong.
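
The memcmp-style PKCS#1 v1.5 verification referred to here can be sketched as follows: instead of parsing the padding structure out of the decrypted signature block (the error-prone decode approach), the verifier re-encodes the one valid EMSA-PKCS1-v1_5 block itself and compares bytes. This sketch hard-codes SHA-256 and is illustrative of the technique, not of NSS's actual code:

```python
import hashlib

# DER prefix of the SHA-256 DigestInfo (AlgorithmIdentifier + OCTET STRING
# header), as listed in the PKCS#1 notes (RFC 8017, section 9.2).
SHA256_DIGESTINFO_PREFIX = bytes.fromhex(
    "3031300d060960864801650304020105000420"
)

def emsa_pkcs1_v15_encode(message: bytes, em_len: int) -> bytes:
    """Build the single valid encoded message: 0x00 0x01 PS 0x00 DigestInfo."""
    t = SHA256_DIGESTINFO_PREFIX + hashlib.sha256(message).digest()
    ps = b"\xff" * (em_len - len(t) - 3)  # padding string of 0xFF bytes
    return b"\x00\x01" + ps + b"\x00" + t

def verify_encoded(decrypted_sig_block: bytes, message: bytes) -> bool:
    # "memcmp" style: one comparison against the single valid encoding,
    # rather than parsing attacker-controlled padding and DigestInfo.
    expected = emsa_pkcs1_v15_encode(message, len(decrypted_sig_block))
    return decrypted_sig_block == expected
```

Because there is exactly one valid encoding for a given message and modulus size, the comparison leaves no room for the padding-parsing ambiguities that Bleichenbacher-style forgery bugs have exploited.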


> that's about RSA-PSS parameters in SPKI or about RSA-PSS OID in SPKI?
>

Both!

I think the 'correct' solution from a policy perspective, given these
constraints, is:
- rsaEncryption as SPKI is perfectly fine (and, indeed, the only one that
interoperates)
- I would even go as far as to say rsaEncryption as SPKI should be
*required*, as anything else is merely an expression of intent that only
matters if the private key control has been lost or confused
- The policy itself should express that while the certificate may not
express the intent, any other use of the associated private key is a
fundamental trust violation, regardless of whether or not clients will
accept it (again, because it means you've lost control of the key / cannot
keep it constrained as you intended to)
- If allowed, it should be fully absent or byte-for-byte identical to the
signature
- rsaPSS as the signature
- It should have byte-for-byte encodings of the blessed forms for PSS +
SHA-256/384/512
- This means no salt size variability as an unnecessary complexity
that imposes decoding costs upon the client
- It should be byte-for-byte matched with the SPKI if PSS-in-SPKI is
permitted and present
- Any violation of that (tbsCert.signature / tbsCertList.signature !=
issuer.SPKI) is misissuance. Regardless of whether a 4055 client would
accept/reject it
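
The "byte-for-byte" requirement in the list above reduces a client's job to exact comparison against a small whitelist, plus an exact match against the issuer's SPKI parameters when PSS-in-SPKI is present. A sketch; the tuples stand in for the actual DER encodings, which the thread does not spell out:

```python
# Sketch of the proposed strict matching. Each tuple stands in for one
# blessed DER encoding of RSASSA-PSS-params: (hash, MGF, salt length).
BLESSED_PSS_PARAMS = {
    ("sha256", "mgf1-sha256", 32),
    ("sha384", "mgf1-sha384", 48),
    ("sha512", "mgf1-sha512", 64),
}

def signature_acceptable(sig_params, issuer_spki_params) -> bool:
    """sig_params models tbsCert.signature parameters; issuer_spki_params is
    None when the issuer SPKI is plain rsaEncryption."""
    if sig_params not in BLESSED_PSS_PARAMS:
        return False  # no salt variability, no off-list hash combinations
    if issuer_spki_params is not None and issuer_spki_params != sig_params:
        return False  # mismatch would be misissuance under the proposal
    return True

assert signature_acceptable(("sha256", "mgf1-sha256", 32), None)
assert not signature_acceptable(("sha256", "mgf1-sha256", 20), None)
assert not signature_acceptable(("sha384", "mgf1-sha384", 48),
                                ("sha256", "mgf1-sha256", 32))
```

In the real encoding the comparison would be over the DER bytes of the AlgorithmIdentifier, so no parameter decoding is needed at all.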

Hubert Kario

unread,
Dec 1, 2017, 7:34:54 AM
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
I discussed it with Bob Relyea, Daiki Ueno, Nikos Mavrogiannopoulos and they
see it as a valid concern and acceptable solution.

My feeling from the discussion on the TLS WG mailing list about the same
topic was the same - that prohibiting use of a key for RSA key exchange has
value.
(continued below)

> It does feel like again the argument is The CA/EE should say 'I won't do X'
> so that a client won't accept a signature if the CA does X, except it
> doesn't change the security properties at all if the CA/EE does actually do
> X, and the only places it does affect the security properties are either
> already addressed (e.g. digitalSignature EKU) or themselves not protected
> by the proposed mechanism.

a). I think you're talking about Key Usage, not Extended Key Usage
b). digitalSignature is a Key Usage, not Extended Key Usage bit
c). Extended Key Usage has only one flag for use in TLS - serverAuth - which
doesn't say anything about applicability of the key for SKE signature but not
RSA key exchange
d). show me the clients that actually honour the Key Usage flags for TLS in a
way that prevents use of certificate with rsaEncryption SPKI for RSA key
exchange

so, yes, I'm afraid that you "must be missing something"

> > that's about RSA-PSS parameters in SPKI or about RSA-PSS OID in SPKI?
>
> Both!
>
> I think the 'correct' solution from a policy perspective, given these
> constraints, is:
> - rsaEncryption as SPKI is perfectly fine (and, indeed, the only one that
> interoperates)
> - I would even go as far as to say rsaEncryption as SPKI should be
> *required*, as anything else is merely an expression of intent that only
> matters if the private key control has been lost or confused

that's your opinion, not the community consensus

> - The policy itself should express that while the certificate may not
> express the intent, any other use of the associated private key is a
> fundamental trust violation, regardless of whether or not clients will
> accept it (again, because it means you've lost control of the key / cannot
> keep it constrained as you intended to)

I have no problem with phrasing it primarily in terms of policy and allowing
for that policy to be duplicated in CA's certificate SPKI

> - If allowed, it should be fully absent or byte-for-byte identical to the
> signature

again, no problem

> - rsaPSS as the signature
> - It should have byte-for-byte encodings of the blessed forms for PSS +
> SHA-256/384/512

no problem

> - This means no salt size variability as an unnecessary complexity
> that imposes decoding costs upon the client

no problem

> - It should be byte-for-byte matched with the SPKI if PSS-in-SPKI is
> permitted and present

of the issuing CA certificate, yes

> - Any violation of that (tbsCert.signature / tbsCertList.signature !=
> issuer.SPKI) is misissuance. Regardless of whether a 4055 client would
> accept/reject it

that was my intent

so I think we're done here - from what I can tell the only material change is
that the variable salt length is disallowed and that the requirements are
spelled out in terms of policy, not technical limitations
signature.asc

Ryan Sleevi

unread,
Dec 1, 2017, 9:34:10 AM12/1/17
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
On Fri, Dec 1, 2017 at 7:34 AM, Hubert Kario <hka...@redhat.com> wrote:

> > It does feel like again the argument is The CA/EE should say 'I won't do
> X'
> > so that a client won't accept a signature if the CA does X, except it
> > doesn't change the security properties at all if the CA/EE does actually
> do
> > X, and the only places it does affect the security properties are either
> > already addressed (e.g. digitalSignature EKU) or themselves not protected
> > by the proposed mechanism.
>
> a). I think you're talking about Key Usage, not Extended Key Usage
> b). digitalSignature is a Key Usage, not Extended Key Usage bit
> c). Extended Key Usage has only one flag for use in TLS - serverAuth -
> which
> doesn't say anything about applicability of the key for SKE signature but
> not
> RSA key exchange
> d). show me the clients that actually honour the Key Usage flags for TLS
> in a
> way that prevents use of certificate with rsaEncryption SPKI for RSA key
> exchange
>
> so, yes, I'm afraid that you "must be missing something"
>

So while we started off in disagreement, it sounds like we have cycled back
to the view that RSA-PSS-params, if present, should be memcmp() able
(between SPKI and Signature and between Signature and Policy)

So the only thing that we're debating here is whether or not expressing
RSA-PSS in the SPKI (at all) is a good thing.

The view in favor of this is:
- Because CAs have made a complete mess of the existing rsaEncryption + KU,
clients don't check KU for rsaEncryption (Notably, they do check KU for
ECDSA because that's necessary to distinguish from ECDH)
- If a certificate is encoded with rsaEncryption, it's possible for a
server to use it both with TLS 1.2 RSA PKCS#1v1.5 ciphersuites and TLS 1.3
RSA-PSS ciphersuites
- If used with TLS 1.2 RSA PKCS#1v1.5 ciphersuites, it's possible that the
implementation may be buggy and subject to Bleichenbacher
- And expressing (via the SPKI OID) is an 'effective' way to prevent that
downgrade, which itself is only a risk if you're using a buggy
implementation.

Is that accurate?

To offset that risk, the goal is to use the SPKI algorithm as the signal to
'do not downgrade algorithms' (in this case, from PSS to PKCS#1v1.5).
This, despite the fact that SPKI parsing does not correctly work on any
platform
- Windows and NSS both apply DER-like BER parsers and do not strictly
reject (Postel's principle, despite Postel-was-wrong)
- macOS and iOS reject unrecognized SPKIs as weak keys
- Android supports PSS-signatures but a provider for decoding said public
keys is not provided by default

Are there any other arguments in favor of the PSS-SPKI not captured here?

I think that we agree on the substance of the PSS implementation - Must Be
Memcmp-able - which addresses many of the client complexity concerns. The
deployment complexity concerns are unavoidable - few clients support RSA-PSS,
in part because of the disaster that is RFC 4055 - but that's a deployment
concern, not an implementation concern.

As it relates to what changes this means for NSS:
- Strictly enforcing (memcmp)ing the accepted parameters that NSS accepts
- That means NSS should NOT support arbitrary salt lengths, as doing so
adds flexibility at the cost of maintainability and security
- This resolves the DER-like BER decoding
- Strictly enforcing the KU for RSA-PSS (which it improperly enforces KUs
on keys today already, but hopefully RSA-PSS has not been ruined)

Is that correct?
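
The "no arbitrary salt lengths" rule above reduces to a small whitelist check; a sketch, using the salt-equals-hash-length mapping proposed earlier in the thread:

```python
# Strict salt-length policy from the thread: the salt length must equal the
# hash output length, and SHA-1/SHA-224 are disallowed outright. Anything
# else is rejected rather than decoded flexibly.

ALLOWED_SALT_LEN = {"sha256": 32, "sha384": 48, "sha512": 64}

def salt_length_ok(hash_name: str, salt_len: int) -> bool:
    """Reject unknown hashes (e.g. sha1) and any non-canonical salt length."""
    return ALLOWED_SALT_LEN.get(hash_name) == salt_len

assert salt_length_ok("sha256", 32)
assert not salt_length_ok("sha256", 222)   # OpenSSL-style maximum salt rejected
assert not salt_length_ok("sha1", 20)      # SHA-1 disallowed outright
```
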

Hubert Kario

unread,
Dec 1, 2017, 10:24:19 AM12/1/17
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
yes

> To offset that risk, the goal is to use the SPKI algorithm as the signal to
> 'do not downgrade algorithms' (in this case, from PSS to PKCS#1v1.5).
> This, despite the fact that SPKI parsing does not correctly work on any
> platform

rejecting what you do not understand (iOS, Android) is completely valid and
expected behaviour - e.g. NSS server still won't use (at all) RSA-PSS keys
imported from PKCS#12 file...

> - Windows and NSS both apply DER-like BER parsers and do not strictly
> reject (Postel's principle, despite Postel-was-wrong)

NSS did till very recently reject them, OpenSSL 1.0.2 still rejects them
(probably even 1.1.0), are you certain that Windows doesn't reject
certificates with SPKI with RSA-PSS OID? I mean, you _need_ additional code to
know that the public key for OID rsaEncryption and rsassaPss is formatted in
one and the same way... If you don't have that code, it looks like a
completely different key type (think EdDSA or ECDSA for an RSA-only
implementation)

> - macOS and iOS reject unrecognized SPKIs as weak keys
> - Android supports PSS-signatures but a provider for decoding said public
> keys is not provided by default
>
> Are there any other arguments in favor of the PSS-SPKI not captured here?

there is a remote chance that RSA-PSS with non-zero salts is strictly more
secure (unforgeable) than PKCS#1 v1.5, but for the sake of argument let's say
that what you said is the primary and only argument for RSA-PSS OID in SPKI

so no, there aren't other arguments

> I think that we agree on the substance of the PSS implementation - Must Be
> Memcmp-able - makes many of the client complexity concerns. The deployment
> complexity concerns are unavoidable - few clients support RSA-PSS in part
> because of the disaster than is RFC 4055 - but that's a deployment concern,
> not an implementation concern.
>
> As it relates to what changes this means for NSS:
> - Strictly enforcing (memcmp)ing the accepted parameters that NSS accepts
> - That means NSS should NOT support arbitrary salt lengths, as doing so
> adds flexibility at the cost of maintainability and security
> - This resolves the DER-like BER decoding
> - Strictly enforcing the KU for RSA-PSS (which it improperly enforces KUs
> on keys today already, but hopefully RSA-PSS has not been ruined)
>
> Is that correct?

yes, fine by me

and fine for NSS too, if those changes don't have to be implemented in the next
month or two, but have to be implemented before NSS with the final TLS 1.3
version ships
signature.asc

Jakob Bohm

unread,
Dec 1, 2017, 10:33:49 AM12/1/17
to mozilla-dev-s...@lists.mozilla.org
Depending on the prevalence of non-public CAs (not listed in public
indexes) based on openssl (this would be a smallish company thing more
than a big enterprise thing), it might be useful to have *two* fixed
salt lengths for each combination of hash algorithm and RSA key length:

1. The salt length=hash length case previously suggested.

2. The salt length=largest permitted by RSA key length and hash length
(OpenSSL default).

Each of these could still be defined in a memcmp-able way.
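
For context, the OpenSSL default in option 2 is the largest salt RFC 8017 permits for a given modulus and hash; a sketch of that computation:

```python
# Maximum RSASSA-PSS salt length (the OpenSSL default in option 2 above),
# per RFC 8017: emLen - hLen - 2, where emLen = ceil((modBits - 1) / 8).
# This is why the value varies with the RSA key size, unlike option 1.

def max_salt_length(mod_bits: int, hash_len: int) -> int:
    em_len = (mod_bits - 1 + 7) // 8   # ceil((modBits - 1) / 8)
    return em_len - hash_len - 2

assert max_salt_length(2048, 32) == 222   # RSA-2048 + SHA-256
assert max_salt_length(2048, 64) == 190   # RSA-2048 + SHA-512
assert max_salt_length(4096, 48) == 462   # RSA-4096 + SHA-384
```

This is what would force one blessed encoding per (hash, key size) pair rather than one per hash.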

>> - This resolves the DER-like BER decoding
>> - Strictly enforcing the KU for RSA-PSS (which it improperly enforces KUs
>> on keys today already, but hopefully RSA-PSS has not been ruined)
>>
>> Is that correct?
>
> yes, fine by me
>
> and fine for NSS too, if that changes don't have to be implemented in next
> month or two, but have to be implemented before NSS with final TLS 1.3 version
> ships
>


Ryan Sleevi

unread,
Dec 1, 2017, 11:07:33 AM12/1/17
to Jakob Bohm, mozilla-dev-security-policy
On Fri, Dec 1, 2017 at 10:33 AM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> Depending on the prevalence of non-public CAs (not listed in public
> indexes) based on openssl (this would be a smallish company thing more
> than a big enterprise thing), it might be useful to have *two* fixed
> salt lengths for each combination of hash algorithm and RSA key length:
>
> 1. The salt length=hash length case previously suggested.
>
> 2. The salt length=largest permitted by RSA key length and hash length
> (OpenSSL default).
>
> Each of these could still be defined in a memcmp-able way.
>

Yes. You could add flexibility if there were both data to support it and
justification for the added complexity (passed on to all consumers).

I think there is a tremendously high bar to suggest such things are good,
and I don't think it's very useful to discuss what's possible without
having a position in favor (and data to support) or against (and data to
support).

Ryan Sleevi

unread,
Dec 1, 2017, 11:12:35 AM12/1/17
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
On Fri, Dec 1, 2017 at 10:23 AM, Hubert Kario <hka...@redhat.com> wrote:
>
> > - Windows and NSS both apply DER-like BER parsers and do not strictly
> > reject (Postel's principle, despite Postel-was-wrong)
>
> NSS did till very recently reject them, OpenSSL 1.0.2 still rejects them
> (probably even 1.1.0), are you certain that Windows doesn't reject
> certificates with SPKI with RSA-PSS OID? I mean, you _need_ additional
> code to
> know that the public key for OID rsaEncryption and rsassaPss is formatted
> in
> one and the same way... If you don't have that code, it looks like a
> completely different key type (think EdDSA or ECDSA for RSA-only
> implementation)
>

Apologies for again not being very precise here (and now you can see why I
also like precision in policy)

Both Windows and (as of now) NSS accept RSA-PSS SPKIs, but they apply
rather liberal decoders that do not enforce the DER encoding rules - for
example, it's valid to supply an explicitly encoded SHA-1 hash, rather than
omit it, despite DER rules stating you don't encode the default value.

That's why I say they do not strictly reject - they demonstrate the very
problem I'm suggesting we avoid (by memcmp()ing), which is that validation
is hard and not consistently implemented.
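
To illustrate the DER point: a strict, memcmp-style check never needs to decode the parameters at all. The byte strings below are hand-assembled for illustration, and the explicit-SHA-1 form is one plausible BER encoding of the defaults (with NULL hash parameters, per the RFC 4055 convention):

```python
# DER forbids encoding DEFAULT values; a strict decoder must treat an
# explicitly-encoded SHA-1 default as invalid, while the liberal decoders
# described above accept both forms as the same logical value.

# RSASSA-PSS-params with every field at its default: an empty SEQUENCE.
canonical_der = bytes.fromhex("3000")

# The same logical value with hashAlgorithm [0] spelled out as SHA-1
# (OID 1.3.14.3.2.26) - legal BER, invalid DER.
explicit_default = bytes.fromhex("300da00b300906052b0e03021a0500")

def strict_pss_params_ok(params: bytes) -> bool:
    """Memcmp-style check: only the canonical encoding is accepted."""
    return params == canonical_der

assert strict_pss_params_ok(canonical_der)
assert not strict_pss_params_ok(explicit_default)
```

A decoder that normalizes both forms before comparing is exactly the kind of inconsistently-implemented validation being argued against.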


> and fine for NSS too, if that changes don't have to be implemented in next
> month or two, but have to be implemented before NSS with final TLS 1.3
> version
> ships


Is there a reason not to disable RSA-PSS support in NSS for certificate
signatures until that time?

The argument in favor is that this would be a known-buggy implementation
(as already demonstrated by the parameter decoder)
The argument against is that, in addition to rejecting definitely-bad
certs, it would reject definitely-good certs, and thus would limit the
ability to test TLS1.3's experimental implementation.

Is that correct?

If that is, I'm not sure why a policy of disable-by-default - while allowing
code that wants to experiment with TLS 1.3 (knowing it's experimental) to
enable the experimental (but incomplete) PSS support - wouldn't work.

Hubert Kario

unread,
Dec 1, 2017, 11:14:10 AM12/1/17
to dev-secur...@lists.mozilla.org, Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Friday, 1 December 2017 16:33:10 CET Jakob Bohm via dev-security-policy
wrote:
> On 01/12/2017 16:23, Hubert Kario wrote:
> Depending on the prevalence of non-public CAs (not listed in public
> indexes) based on openssl (this would be a smallish company thing more
> than a big enterprise thing), it might be useful to have *two* fixed
> salt lengths for each combination of hash algorithm and RSA key length:
>
> 1. The salt length=hash length case previously suggested.
>
> 2. The salt length=largest permitted by RSA key length and hash length
> (OpenSSL default).
>
> Each of these could still be defined in a memcmp-able way.

the problem is that then you need to multiply that by at least sensible RSA
key sizes (2048, 3072, 4096) and that makes it a long list rather quick

combine that with the fact that I haven't seen a single certificate like that
on an IPv4 accessible server, while I did see a good 3 to 4 dozen that match
salt length == hash length, and I think we can safely say that OpenSSL in its
default configuration is not commonly used with rsa-pss signatures in internal
CA deployments
signature.asc

Hubert Kario

unread,
Dec 1, 2017, 11:20:51 AM12/1/17
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
On Friday, 1 December 2017 17:11:56 CET Ryan Sleevi wrote:
> On Fri, Dec 1, 2017 at 10:23 AM, Hubert Kario <hka...@redhat.com> wrote:
> > and fine for NSS too, if that changes don't have to be implemented in next
> > month or two, but have to be implemented before NSS with final TLS 1.3
> > version
> > ships
>
> Is there a reason not to disable RSA-PSS support in NSS for certificate
> signatures until that time?

yes, disabling it without disabling RSA-PSS support in TLS (and thus TLS 1.3
in its entirety) is non-trivial and not possible with current code base

> The argument in favor is that this would be a known-buggy implementation
> (as already demonstrated by the parameter decoder)
> The argument against is that, in addition to rejecting definitely-bad
> certs, it would reject definitely-good certs, and thus would limit the
> ability to test TLS1.3's experimental implementation.

I don't think NSS does reject good certs, can you provide an example of such a
certificate?
signature.asc

Jakob Bohm

unread,
Dec 1, 2017, 12:34:50 PM12/1/17
to mozilla-dev-s...@lists.mozilla.org
I am saying someone with the resources should check if there is such
data.

If the data shows that OpenSSL-style salt lengths are common in closed
networks, the complexity would consist of:

1. Having two (rather than one) valid value per hash algorithm.

2. The second valid value (OpenSSL default) needs to be computed from
the RSA key length (it's not a fixed value, though test vectors can
be given for common RSA key lengths). In practice there would be
one value (except 2 bytes) for salt lengths < 65536 bits, one for salt
lengths >= 65536 bits (with 3 varying bytes), so a full DER encoder is
not even needed, though most X.509 libraries will already contain a
suitable DER encoder. Salt lengths < 256 bits are already banned by
other parts of policy, salt lengths >= 16Mbit are unrealistic.

Ryan Sleevi

unread,
Dec 1, 2017, 1:36:52 PM12/1/17
to Hubert Kario, Ryan Sleevi, dev-secur...@lists.mozilla.org
On Fri, Dec 1, 2017 at 11:20 AM, Hubert Kario <hka...@redhat.com> wrote:

> On Friday, 1 December 2017 17:11:56 CET Ryan Sleevi wrote:
> > On Fri, Dec 1, 2017 at 10:23 AM, Hubert Kario <hka...@redhat.com> wrote:
> > > and fine for NSS too, if that changes don't have to be implemented in
> next
> > > month or two, but have to be implemented before NSS with final TLS 1.3
> > > version
> > > ships
> >
> > Is there a reason not to disable RSA-PSS support in NSS for certificate
> > signatures until that time?
>
> yes, disabling it without disabling RSA-PSS support in TLS (and thus TLS
> 1.3
> in its entirety) is non-trivial and not possible with current code base
>
> > The argument in favor is that this would be a known-buggy implementation
> > (as already demonstrated by the parameter decoder)
> > The argument against is that, in addition to rejecting definitely-bad
> > certs, it would reject definitely-good certs, and thus would limit the
> > ability to test TLS1.3's experimental implementation.
>
> I don't think NSS does reject good certs, can you provide example of such a
> certificate?


We both said the same thing :) That is, the reason to not disable / the
argument against disabling is it'd disable TLS1.3 - unless someone enabled
(RSA-PSS & TLS1.3)

That said, considering that TLS 1.3 is not "stable" in NSS (after all, it's
still a draft), I'm not sure how unreasonable it would be to say that
RSA-PSS should only be enabled if the caller enabled TLS1.3, but given
Firefox enabling TLS1.3, the value itself may be minimal-to-negative, since
it'd always enable RSA-PSS anyways.

Ryan Sleevi

unread,
Dec 1, 2017, 1:39:09 PM12/1/17
to Jakob Bohm, mozilla-dev-security-policy
On Fri, Dec 1, 2017 at 12:34 PM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 01/12/2017 17:06, Ryan Sleevi wrote:
>
> I am saying someone with the resources should check if there is such
> data.
>

I'm not disagreeing with you that's a potential step.

I'm saying that unless you're stepping up with that data, then describing
how and saying someone should do it - without data to support its necessity
or lack thereof - isn't as useful.

That is, you've described a possible hypothetical scenario. You've
described how it could be measured. We could rathole into the discussions
about the challenges in such measurement (and the time to gather such
data), but such a discussion would not be useful without some initial sense
of how realistic that hypothetical is. We know, from the facts of the
matter, that the realistic nature of that hypothetical is low, and
furthermore, given the facts, the relative impact of said hypothetical is
low. So I don't think it's necessarily useful to discuss what we could do to
support an unmeasured hypothetical whose prevalence can be empirically
deduced to be low a priori.