
Sanctions short of distrust


Nick Lamb

Aug 31, 2016, 3:43:50 PM
to mozilla-dev-s...@lists.mozilla.org
A recurring theme of m.d.s.policy is that a CA behaves in a way that falls short, sometimes far short, of the reasonable expectations of relying parties, and yet in the end Mozilla doesn't distrust that CA because of the direct impact on relying parties, the indirect impact on subscribers, and the indirect impact on Mozilla itself.

This suggests the need for some options short of distrust which can be deployed instead, but Mozilla does not seem to have any. If in fact it already does, this would be a great place to say what they are and to discuss why they could not be used in recent cases.

I have spent some time thinking about this, but I am only one person, and one with relatively little in-depth knowledge of the Mozilla project, so I will lay out the options I've thought about and invite feedback, particularly any alternative suggestions.

1. Implement "Require SCTs" for problematic CAs. Notify the CA that they are obliged to CT log all certificates, inform subscribers etc., or their subscribers' certificates will suddenly become invalid in Firefox from some future date.

This option would be a software change (to Firefox at least) making it able to reject certificates which chain to a particular root CA unless it is also presented with valid SCTs proving the certificate was CT logged.
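
As a rough sketch (not a description of any existing NSS or Firefox code), the per-root requirement could look something like the following; every name and data shape below is invented for illustration:

    # Hypothetical per-root "require SCTs" enforcement sketch.
    CT_REQUIRED_ROOTS = {"Example Problematic Root CA"}   # roots under sanction
    TRUSTED_LOGS = {"example-log-1", "example-log-2"}     # hypothetical CT log IDs

    def chain_acceptable(root_name, scts):
        """root_name is the trust anchor the chain ends in; scts is a list of
        dicts like {"log_id": ..., "signature_valid": bool} presented with the
        certificate. Ordinary chain validation is assumed to have passed."""
        if root_name not in CT_REQUIRED_ROOTS:
            return True          # normal rules apply for unaffected roots
        # For sanctioned roots, require at least one valid SCT from a trusted log.
        return any(s["log_id"] in TRUSTED_LOGS and s["signature_valid"] for s in scts)

    # A cert chaining to the sanctioned root with no SCTs would be rejected:
    print(chain_acceptable("Example Problematic Root CA", []))   # False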

GOOD: Any certificates which aren't being logged are now much less useful to bad guys

GOOD: Future bad certificates will more likely be seen in the CT logs and can be acted on directly by anyone.

GOOD: Drives further uptake of and interest in CT logging, which arguably improves the overall quality of the web PKI.

GOOD: Requires CA to actively contact subscribers, inform them of the issue, ensuring subscribers know of problematic behaviour by the CA.

BAD: Requires non-trivial software development effort (unclear how much of this work is done in #944175, #1284256)

BAD: Does not directly protect downstream users of NSS trust store.

RISK: Hard to enforce notifications from the CA to its subscribers.

RISK: May create incentive for less security conscious subscribers to move to a CA that doesn't require SCTs, even though this increases risk for them and for their relying parties.


2. Create an "at risk" category for problematic CAs which lasts some finite period of time (or a period to be set in each case). Notify the CA that they are obliged to warn their subscribers of this status or leave the Mozilla programme immediately. Publicly announce the "at risk" status and drive it as PR.

This option is purely a policy/ procedure change. Mozilla already reserves the right to distrust CAs. This would introduce a state in which the CA had been publicly warned that their behaviour fell short and must tell their subscribers that they are at risk of being distrusted.

GOOD: Requires CA to actively contact subscribers, inform them of the issue, ensuring subscribers know of problematic behaviour by the CA.

GOOD: Provides a clear stepping stone to distrust.

GOOD: Might drive subscribers away from problematic CAs to ones with a better track record due to perceived risk of subsequent distrust.

BAD: Doesn't directly DO anything to protect Mozilla's relying parties, purely a paperwork measure.

RISK: Hard to enforce notifications from the CA to its subscribers.



3. Split the NSS trust store into two or more categories based on degree of trustworthiness. Maybe present a Firefox pref to pick "secure" vs "compatible".

This option arguably punts on the problem, leaving it for end users or local administrators to decide whether the increased rate of trust failures is an acceptable or even desirable outcome from distrusting more CAs.

GOOD: Might drive subscribers away from problematic CAs to ones with a better track record to increase their chance of acceptance

GOOD: Gives relying parties something specific they can do short of making all their own trust decisions by hand.

BAD: Reduces perceived value of basic inclusion in the Mozilla root programme.

RISK: Press may cover this as Mozilla abandoning their role

RISK: Might need software development to make this behaviour useful to ordinary end users


Finally, though I expect it to be shot down, I would like to mention a much more radical way forward: RP audits, Relying Party audits.

Today audits are performed by a third party, selected and paid directly by the CA, largely for the benefit of the CA and secondarily for its subscribers. The auditors rarely find anything interesting (even when we know there was much to be found) and the result is of very little value to the people we care about in m.d.s.policy, the relying parties.

I believe audits should be conducted on behalf of the relying parties, with auditors selected by them, though still ultimately paid for by the CAs. The auditor's interim and final reports would go to relying parties, perhaps via trust stores like Mozilla or Google, or perhaps via a new third party representing the relying parties (which today means basically the world population). The auditor's directions and goals would be set by the relying parties, for their benefit, rather than merely with the intention to produce a useless piece of paper and check the box.

CAs would be under no obligation to subject themselves to RP audits, but of course if this were popularized it wouldn't make sense for a trust store to accept CAs that don't undergo them, since such CAs impose an incalculably greater risk on the relying parties.

My model in thinking about this problem has been the Paris MOU. Historically by treaty the enforcement of most laws of the sea was left to flag states, sovereign entities which registered civilian vessels and permitted them to sail under that country's flag. Late last century Europe grew increasingly dissatisfied with the result: "flags of convenience" aka "open registers" in which a small country offered cheap registration to foreign operated vessels often with little or no actual enforcement. Dangerously under-maintained or poorly staffed vessels flagged by distant island nations threatened Europe's vital sea ports with pollution, fatal accidents and loss of cargo.

So in Paris a Memorandum of Understanding was signed in which European port states agreed they'd all impose a new regime: vessels entering their ports could be inspected by officials working for the state the port was in, regardless of their flag. Vessels which failed inspection would be detained until they were improved, despite the cost to the port involved. Statistical models, rather than gut instinct, inform the decisions of which vessels are inspected, and today this regime is so successful that it has been replicated around the world.

Hanno Böck

Sep 1, 2016, 3:07:48 AM
to dev-secur...@lists.mozilla.org
On Wed, 31 Aug 2016 12:43:38 -0700 (PDT)
Nick Lamb <tiala...@gmail.com> wrote:

> 1. Implement "Require SCTs" for problematic CAs. Notify the CA they
> are obliged to CT log all certificates, inform subscribers etc. or
> their subscriber's certificates will suddenly be invalid in Firefox
> from some future date.

I think this is generally a very good thing, because CT has uncovered a
lot of CA badness in the past.
I'm happy to see that WoSign is going down that route (not sure if
someone forced them to do so or if they did this voluntarily, but it seems
like the right step).

I'd like to propose another feature that one could ask "problematic"
CAs to implement: CAA.
It's a relatively simple thing: a domain owner publishes a DNS record
that says which CAs they want to be allowed to issue certs.
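
As a rough sketch of the issuer-side check, assuming the dnspython library (the domain and CA identifier are made-up examples; a real implementation must also follow the RFC 6844 rules for climbing to parent domains, CNAMEs and DNSSEC):

    import dns.resolver

    def caa_allows(domain, ca_identifier):
        """Return True if CAA records on `domain` permit `ca_identifier` to issue.
        Sketch only: no parent-domain climbing, no CNAME handling, no DNSSEC."""
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return True     # no CAA records: issuance is not restricted by CAA
        issue_values = [r.value.decode() for r in answers if r.tag.decode() == "issue"]
        if not issue_values:
            return True     # records exist but none carry an "issue" property
        return ca_identifier in issue_values

    print(caa_allows("example.org", "ca.example.net"))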

Good thing: others can easily test whether a CA implements it,
and it may reduce misissuance.

I'm inclined to say every CA should implement CAA, but it seems last
time this was discussed in the CA/Browser-Forum they agreed to make
this a SHOULD, not a MUST.

--
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: FE73757FA60E4E21B937579FA5880072BBB51E42

Ryan Sleevi

Sep 1, 2016, 3:30:40 AM
to mozilla-dev-s...@lists.mozilla.org
On Thursday, September 1, 2016 at 12:07:48 AM UTC-7, Hanno Böck wrote:
> Good thing: Can be easily tested by others whether a CA implements it
> and it may reduce misissuances.
>
> I'm inclined to say every CA should implement CAA, but it seems last
> time this was discussed in the CA/Browser-Forum they agreed to make
> this a SHOULD, not a MUST.

There's still concern about how the practical implementation would work. That's the curse of some WGs - due to a variety of externalities, rough consensus may be formed, but running code - especially operational - leads to practical challenges. We see this with RFC 6962 (and RFC 6962-bis), we saw this with HPKP, and I would argue, we see this with CAA as well.

What was discussed in the Forum is the lack of defined policies for what it means to "implement CAA". For example, if Trustwave were to see a CAA record for "symantec.com", could it issue the cert? Why or why not? To what forms does the CAA record apply with regard to issuance - for example, if a CA were to go in person, sit down in front of the CTO/COO, verify their passport, verify with their lawyers that the CTO was duly authorized, then even if the CAA record said otherwise, could they issue then? During the Forum discussion, it was clear that Symantec's representative had some confusion about CAA, which similarly suggests that we will likely see the same implementation issues in CAs that have led to the many RFC 5280 violations, but as CAA is designed as an issuer-side check, at time of issuance, there will be no way for the community to evaluate such compliance.

To be clear: I'm an ardent supporter of CAA. I, ideally, want to see CAs leading the way in thinking through the issues related to CAA, and the risks their businesses may face, and how best to address them. I'd like to avoid policy by fiat if possible, but support it if that's what it takes to solve the current first mover problem.

But I think that if we do entertain this option, and if it does come to needing root store fiat, there definitely needs to be a clear consensus on the appropriate policies for implementation, and it may take some delicate hand-holding of CAs (... with ample publicly available test cases) to help them evaluate their issuance systems.

Kurt Roeckx

Sep 1, 2016, 4:21:05 AM
to mozilla-dev-s...@lists.mozilla.org
Hi Nick,

I want to thank you for bringing this up, because we always seem to have
the same kind of discussion whenever something happens. Ryan's mail has a
bunch of other suggestions for what we can do.

> 1. Implement "Require SCTs" for problematic CAs.

Is there a reason we don't require publishing everything in CT logs? I
think publishing to the CT log can be relatively simple; SCTs in the
certificate might require more work. We should probably push for
everybody to at least have the ability to embed SCTs.
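
For what it's worth, submitting an existing chain to a log is a single HTTP call in RFC 6962 (the log URL below is a made-up example, and this assumes the requests library); the part that needs real CA-side tooling is getting the returned SCT embedded into newly issued certificates:

    import base64, requests

    def submit_chain(log_url, der_certs):
        """Submit a certificate chain (leaf first, DER-encoded) to a CT log.
        Returns the log's SCT fields (sct_version, id, timestamp, signature)."""
        body = {"chain": [base64.b64encode(der).decode() for der in der_certs]}
        resp = requests.post(log_url.rstrip("/") + "/ct/v1/add-chain",
                             json=body, timeout=10)
        resp.raise_for_status()
        return resp.json()

    # sct = submit_chain("https://ct.example-log.example/", [leaf_der, intermediate_der])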

> 2. Create "at risk" category for problematic CAs which lasts some finite period of time

Could we maybe combine this with UI changes?

> Finally, I would like to mention, though I expect it to be shot down, a much more radical way forward. RP audits. Relying Party audits.

I think an alternative is that we change the requirements for what the
current auditors all have to check. I understand that the reason they
don't check more is that it would require more time (and money) to do
the yearly audits.

It might also be useful to have requirements for what should be in the
audit report.


Kurt

Jakob Bohm

Sep 2, 2016, 1:43:13 AM
to mozilla-dev-s...@lists.mozilla.org
On 01/09/2016 09:30, Ryan Sleevi wrote:
> On Thursday, September 1, 2016 at 12:07:48 AM UTC-7, Hanno Böck wrote:
>> Good thing: Can be easily tested by others whether a CA implements it
>> and it may reduce misissuances.
>>
>> I'm inclined to say every CA should implement CAA, but it seems last
>> time this was discussed in the CA/Browser-Forum they agreed to make
>> this a SHOULD, not a MUST.
>
> There's still concern about how the practical implementation would work. That's the curse of some WGs - due to a variety of externalities, rough consensus may be formed, but running code - especially operational - leads to practical challenges. We see this with RFC 6962 (and RFC 6962-bis), we saw this with HPKP, and I would argue, we see this with CAA as well.
>
> What was discussed in the Forum is the lack of defined policies for what it means to "implement CAA". For example, if Trustwave were to see a CAA record for "symantec.com", could it issue the cert? Why or why not? To what forms does the CAA record apply with regards to issuance - for example, if a CA were to go in person, sit down in front of the CTO/COO, verify their passport, verify with their lawyers that the CTO was duly authorized, then even if the CAA record said otherwise, could they issue then? During the Forum discussion, it was clear that Symantec's representative had some confusion about CAA, which similarly suggests that we will likely see the same implementation issues in CAs that have lead to the many RFC 5280 violations, but as CAA is designed as an issuer-side check, at time of issuance, there will be no way for the community to evaluate such compliance.
>

I would say the CAA check should stop them even before they go to that
CTO meeting. A CAA record would be an *absolute* ban, and if the CTO
really wants the certificate and really has the authority, they also
have the authority (in a very direct way) to have their own CAA record
changed to allow the certificate before ordering.

Of course a CA could have a "grace" policy to allow the customer to fix
its CAA record and then resume the process without paying the full fee
again, or to check the CAA record automatically before requesting
payment for a more expensive certificate type (such as EV). Of course
in either case they must recheck for a CAA record late in the issuance
process, to detect whether a contrary CAA record was created while the
requestor fiddled with their credit card or while the manual vetting
was in progress (this could happen if something bad is happening
in or around the customer organization, such as that CTO getting fired
for cause two minutes after the vetting team left).
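
A minimal sketch of that double check; the helper names and parameters here are entirely hypothetical and only meant to show where the two CAA lookups sit in the pipeline:

    def process_order(domain, ca_identifier, caa_allows, sign):
        """Check CAA when the order is placed and again immediately before
        signing. caa_allows and sign are passed in as stand-ins for the CA's
        real lookup and issuance machinery."""
        if not caa_allows(domain, ca_identifier):
            return "rejected: CAA prohibits issuance at order time"
        # ... payment and manual vetting happen here, possibly taking days ...
        if not caa_allows(domain, ca_identifier):
            return "rejected: CAA record changed while the order was processed"
        return sign(domain)

    print(process_order("example.org", "ca.example.net",
                        lambda d, c: False, lambda d: "issued"))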

> To be clear: I'm an ardent supporter of CAA. I, ideally, want to see CAs leading the way in thinking through the issues related to CAA, and the risks their businesses may face, and how best to address them. I'd like to avoid policy by fiat if possible, but support it if that's what it takes to solve the current first mover problem.
>

The first mover advantage would be quicker rejection of some bad
requests, saving the manpower to even begin asking for paper documents
etc. Consider the economic advantage of getting $$$ from crooks and
just rejecting them without doing actual work. Of cause to reduce
support calls, rejection messages should be very clear such as:

example> Your certificate request #12345 for domain example.org was
example> rejected because the operators of example.org have published
example> the following CAA record, explicitly prohibiting any CA except
example> example.com and example.edu from issuing certificates for
example> example.org. Therefore we (the example.net CA) are prohibited
example> from issuing the requested certificate.
example>
example> example.org. 7300 IN TXT "WHATEVER example.com example.edu"
example> (retrieved on 2017-02-29T01:02:03 UTC from your official
example> DNS server at ns1.example.org, this record was not DNSSEC
example> signed).
example>
example> Of course if you are genuinely the operator of example.org and
example> actually want to permit example.net to issue a certificate
example> such as the one just rejected, you could change your CAA
example> record to permit it, wait at least 26 hours, 1 minute and 40
example> seconds (the 7300 seconds stated in your old CAA record plus
example> 24 hours), and then place a new order.
example>
example> For guidance on how to set up a correct CAA record please
example> refer to our knowledge base article at
example> https://support.example.net/kb/42/

(Notice how every item in the example message was trivially computed
from data used in the rejection calculation, restated in plain language
in accordance with how the CA scripts interpreted the data to cause
that rejection).

(Since this is a hypothetical example, the details of this message do
not reflect the CAA specification's requirements regarding how CAs are
named in CAA records, the relationship with DNSSEC, etc.)

> But I think that if we do entertain this option, and if it does come to needing root store fiat, there definitely needs to be a clear consensus on the appropriate policies for implementation, and it may take some delicate hand-holding of CAs (... with ample publicly available test cases) to help them evaluate their issuance systems.
>

As a testbed/hand-holding service, mozilla.org could set up some test
sub-domains with various CAA records that should/should not allow
each of the Mozilla-trusted CAs to issue certificates for those,
then work with the CA/B Forum to do zero-cost/repaid test requests to
check that those are rejected/accepted as appropriate.

For example the subdomain symantecandcomodotest.mozilla.org could have
a CAA record allowing only Symantec and Comodo to issue certificates
for it, providing a test case for Symantec and Comodo (that they accept
CAA records listing themselves and a competitor as permitted, typical
case for a domain switching CAs) and for everyone else (that they
reject CAA records listing only other CAs). CAs can then test their
new scripts against these domains (outside the production environment,
so no trusted certificate issued regardless of bugs).

Similarly, as a public audit, someone could routinely set up throw-away
domains with CAA records, then request banned certificates to name and
shame bad issuance if any is actually issued (a "mystery shopper" test
strategy). Of course this should involve some checks against bad-faith
testing (such as a tester not actually publishing those test CAA records
until after issuance).


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Gervase Markham

Sep 2, 2016, 6:19:49 AM
to Nick Lamb
On 31/08/16 20:43, Nick Lamb wrote:
> This suggests the need for some options short of distrust which can
> be deployed instead, but Mozilla does not seem to have any. If in
> fact it already does, this would be a great place to say what they
> are and discuss why they haven't been able to be used in recent
> cases.

Have you considered what was done for CNNIC? In that case, we distrusted
all certificates issued after a certain time. We used a whitelist for
determining this, but it would be possible to use the notBefore date in
the certificate. A CA could dodge this by backdating, but if the CA were
also committed to putting all its certs in to CT, then the backdating
would be noticeable.
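
A sketch of the notBefore variant of that idea (the cutoff date and names are illustrative; the actual CNNIC action used a whitelist rather than this check):

    from datetime import datetime, timezone

    DISTRUST_AFTER = datetime(2016, 10, 1, tzinfo=timezone.utc)   # illustrative cutoff

    def cert_trusted(not_before, chains_to_sanctioned_root):
        """Distrust certificates from a sanctioned root whose notBefore is
        after the cutoff. A CA could defeat this by backdating, hence the
        interest in cross-checking against CT logs."""
        if not chains_to_sanctioned_root:
            return True
        return not_before < DISTRUST_AFTER

    print(cert_trusted(datetime(2017, 1, 1, tzinfo=timezone.utc), True))   # False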

> 1. Implement "Require SCTs" for problematic CAs. Notify the CA they
> are obliged to CT log all certificates, inform subscribers etc. or
> their subscriber's certificates will suddenly be invalid in Firefox
> from some future date.

This is not currently possible in Firefox, as Firefox does not have the
ability to check SCTs. We hope to have that ability soon.

> 2. Create "at risk" category for problematic CAs which lasts some
> finite period of time (or a period to be set in each case). Notify
> the CA they are obliged to warn their subscribers of this status or
> leave the Mozilla programme immediately. Publicly announce "at risk"
> status and drive as PR.

One issue to consider with this option would be that reputational damage
is harder to quantify and control than a technical measure, which might
be said to increase the risk that the action would be disproportionate.

> 3. Split NSS trust store into two or more categories based on degree
> of trustworthiness. Maybe present a Firefox pref to pick "secure" vs
> "compatible"

Non-starter, I'm afraid. We are not loading this problem on to users.

> Finally, I would like to mention, though I expect it to be shot down,
> a much more radical way forward. RP audits. Relying Party audits.

Some issues to consider with this approach would be:

* How does the money to pay for such audits flow from the CA to the
auditor, and through whom?

* Who chooses the auditors?

* How do you make sure they remain independent when their funding is
(even indirectly) from the CAs?

* How do you deal with confidentiality issues? CAs have some things they
legitimately wish to keep confidential. And yet such an auditor would
need full access to all their infra and business processes.

* Is it a problem that adding additional costs to becoming a CA
discourages new and possibly innovative companies from entering the market?

Gerv

Jakob Bohm

Sep 2, 2016, 12:34:22 PM
to mozilla-dev-s...@lists.mozilla.org
On 02/09/2016 12:19, Gervase Markham wrote:
> On 31/08/16 20:43, Nick Lamb wrote:
>> This suggests the need for some options short of distrust which can
>> be deployed instead, but Mozilla does not seem to have any. If in
>> fact it already does, this would be a great place to say what they
>> are and discuss why they haven't been able to be used in recent
>> cases.
>
> Have you considered what was done for CNNIC? In that case, we distrusted
> all certificates issued after a certain time. We used a whitelist for
> determining this, but it would be possible to use the notBefore date in
> the certificate. A CA could dodge this by backdating, but if the CA were
> also committed to putting all its certs in to CT, then the backdating
> would be noticeable.
>

If the fraudulently backdated certificates are done (technically)
right, they would have serial numbers consistent with their old date
and would have non-CT logging also consistent with that old date.

Thus the only way to detect that those certificates were not in fact
issued before the cut-off date would be to check against an independent
whitelist of actual pre-cut-off certificates. (Longer recipe at the end
of this post.)

>> 1. Implement "Require SCTs" for problematic CAs. Notify the CA they
>> are obliged to CT log all certificates, inform subscribers etc. or
>> their subscriber's certificates will suddenly be invalid in Firefox
>> from some future date.
>
> This is not currently possible in Firefox, as Firefox does not have the
> ability to check SCTs. We hope to have that ability soon.
>
>> 2. Create "at risk" category for problematic CAs which lasts some
>> finite period of time (or a period to be set in each case). Notify
>> the CA they are obliged to warn their subscribers of this status or
>> leave the Mozilla programme immediately. Publicly announce "at risk"
>> status and drive as PR.
>
> One issue to consider with this option would be that reputational damage
> is harder to quantify and control than a technical measure, which might
> be said to increase the risk that the action would be disproportionate.
>
>> 3. Split NSS trust store into two or more categories based on degree
>> of trustworthiness. Maybe present a Firefox pref to pick "secure" vs
>> "compatible"
>
> Non-starter, I'm afraid. We are not loading this problem on to users.
>
>> Finally, I would like to mention, though I expect it to be shot down,
>> a much more radical way forward. RP audits. Relying Party audits.
>
> Some issues to consider with this approach would be:
>
> * How does the money to pay for such audits flow from the CA to the
> auditor, and through whom?

CA must deposit the standard audit fee (non-negotiable, no race to the
bottom) in a standard bank escrow account releasable by the body that
chooses the auditors.

>
> * Who chooses the auditors?

Obviously not the CA, and since there is no appropriate neutral
democratic body to do this (the UN GA would be overdoing it, ITU and
ICANN are having their own trust issues) maybe an ad hoc body
consisting of the CA/B forum plus extra representatives from relevant
groups such as the EFF, Transparency International, whatever
international organization acts as a union of Auditors.

>
> * How do you make sure they remain independent when their funding is
> (even indirectly) from the CAs?

Their fee structure and election are decided by someone other than the CA,
who also hires and fires them on an annual basis, so they would be
fiscally beholden to that other body, not the CA. For example the
fixed price for a 2017 audit could be $xxxxxx per site per CA
organisation + $yyyyy per root cert + $zzzzz per intermediate cert +
$ccccc per million end certs. The amount would be deposited in an
independent bank at least 2 weeks before the audit is due to begin and
would be releasable only by signatures from the appointing body.

>
> * How do you deal with confidentiality issues? CAs have some things they
> legitimately wish to keep confidential. And yet such an auditor would
> need full access to all their infra and business processes.

This is standard for properly certified auditors to handle, as long as
they are not chosen entirely by a competitor or a spy agency; handling
confidential material under appropriate confidentiality obligations,
often codified by law, is standard auditing practice.

>
> * Is it a problem that adding additional costs to becoming a CA
> discourages new and possibly innovative companies from entering the market?
>

This cost (though maybe higher than the fee for a rubber-stamp auditor)
would replace, not add to, the cost of a CA-hired compliance auditor.

+++++

If a list of grandfathered WoSign (or some other big failed CA) issued
certs is too big to download to every client (in the browser
installer or in any other way), one practical solution could be the
following recipe (sorry, it is a long one; a short code sketch of the
hashing and splitting steps follows the list):

1. Copy all the grandfathered certificates in DER form to a secure
database at Mozilla HQ, a Beijing government agency or some other
strong location.

2. Generate a Merkle hash tree (using SHA-512 or better) of the
   certificates in this original database and publish the top-level hash
   far and wide, for public checking that no further changes will be made
   by the custodian of this database. Methods to access the database
   and check it against the published Merkle tree can be defined later;
   adding or removing certificates after the top hash was published is
   cryptographically "impossible".

3. Compute a more size-friendly SHA-256 of each DER encoded certificate
to remain trusted. Sort this list of SHA-256 values
lexicographically by their byte values.

4. Split the list according to the first m bits; the number (m) of bits
   should be chosen so that the shortest partial list contains at least
   10,000 (ten thousand) certificates and the largest fewer than 100,000
   if possible.

5. Place each list part in its own signed .jar file (to get the
compression and reuse the jar signature checking code) and name it
after the leading bits that are common for that file. Also create
.jar files with empty (0 byte) lists for any values of the leading m
bits that have no certificates. For example WOSIG-A1C.jar would
contain the sole file WOSIG-A1C.dat which is simply a sorted list of
binary SHA-256 values whose first 10 bits are all 1010 0001 11. It
would also contain some kind of expiration date indication.

6. Put these jar files on one or more https CDNs for public download,
for example, the URL for WOSIG-A1C.jar could be
https://foo.mozcdn.org/certwhit/WOSIG-A1C.jar

7. In the relying party configuration data, add an internal list of
certificates that have had this treatment, the hash alg (SHA-256),
the split bit count (e.g. m=10 bits), the authorized .jar signer
identity and the base URL (e.g.
https://foo.mozcdn.org/certwhit/WOSIG-).

8. In the relying party software (NSS) add code to interpret the above
   list and download the relevant fraction's .jar file when an affected
   certificate is seen, then check whether the received certificate is in
   the list. Because each .jar download covers at least 10,000 unrelated
   certificates selected pseudorandomly, it reveals little of privacy
   concern (unless an invalid certificate from a rarely downloaded
   bundle is injected into a connection while monitoring who downloads
   that file, hence the use of an https CDN).

9. As TLS certificates are replaced, revoked etc. the .jar files are
   regenerated and signed again. E-mail and object-signing
   certificates are not removed from the lists, because those signed
   e-mails need to remain checkable at a later time, regardless of
   whether the original signer cooperates or tries to repudiate his own
   signature. Once the last TLS certificate is gone from the list, the
   expiry period of the .jar files is increased significantly, as there
   would be few if any future changes.
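
A compact sketch of steps 3-5, assuming the grandfathered certificates are available as DER blobs; the file naming follows the WOSIG-A1C example above (m=10 bits, left-aligned into three hex digits), and the .jar packaging and signing are omitted:

    import hashlib
    from collections import defaultdict

    def partition_whitelist(der_certs, m=10, prefix="WOSIG"):
        """SHA-256 each DER cert, sort the hashes, and split them by their
        first m bits. Returns {filename: sorted list of raw hash values},
        including empty entries for prefixes with no certificates."""
        hashes = sorted(hashlib.sha256(der).digest() for der in der_certs)
        buckets = defaultdict(list)
        for h in hashes:
            buckets[int.from_bytes(h[:2], "big") >> (16 - m)].append(h)
        width = (m + 3) // 4                      # hex digits in the file name
        return {f"{prefix}-{b << (width * 4 - m):0{width}X}.dat": buckets.get(b, [])
                for b in range(2 ** m)}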

Nick Lamb

Sep 2, 2016, 12:41:54 PM
to mozilla-dev-s...@lists.mozilla.org
Thanks for your feedback Gerv,

On Friday, 2 September 2016 11:19:49 UTC+1, Gervase Markham wrote:
> Have you considered what was done for CNNIC? In that case, we distrusted
> all certificates issued after a certain time. We used a whitelist for
> determining this, but it would be possible to use the notBefore date in
> the certificate. A CA could dodge this by backdating, but if the CA were
> also committed to putting all its certs in to CT, then the backdating
> would be noticeable.

That's a good point, I knew of the CNNIC outcome but didn't add it to my list. It has the obvious GOOD factor that we know Mozilla can successfully do it.

I suspect that a whitelist will almost always prove necessary. Even the suspicion that backdating is happening undermines the value of this sanction.

> > 2. Create "at risk" category for problematic CAs

> One issue to consider with this option would be that reputational damage
> is harder to quantify and control than a technical measure, which might
> be said to increase the risk that the action would be disproportionate.

The intention of my "leave the programme" option was to mitigate this risk. If the CA thinks it would be worse to have reputational damage from the "at risk" declaration than be distrusted, they're free to leave Mozilla's trust store programme entirely instead.

> > Finally, I would like to mention, though I expect it to be shot down,
> > a much more radical way forward. RP audits. Relying Party audits.
> Some issues to consider with this approach would be:
>
> * How does the money to pay for such audits flow from the CA to the
> auditor, and through whom?

Collected by an arm's-length contractor on a volume basis; see e.g. how the Advertising Standards Authority is funded in the UK. This gets easier with CT because volume stops being speculative. The exact funding structure (e.g. logarithmic in certs issued? thresholds for validity months x number of certs?) would be chosen by the arm's-length contractor to meet an overall funding goal.

> * Who chooses the auditors?

Major Trust Stores like Mozilla on behalf of their relying party users. You could build a big clumsy bureaucracy around this but I think the big 3-4 could just settle it among themselves e.g. cut the list of CAs up and take some each, or rotate responsibility.

> * How do you make sure they remain independent when their funding is
> (even indirectly) from the CAs?

The pooling effect of funding this way dilutes the corrupting influence of money. Reporting to the trust stores rather than to the CA reminds auditors who they're representing. Keep in mind that today the funding is indirectly from subscribers, but auditors don't usually meet one, so why would they think about it?

> * How do you deal with confidentiality issues? CAs have some things they
> legitimately wish to keep confidential. And yet such an auditor would
> need full access to all their infra and business processes.

It may be necessary to carefully draft a standard agreement for this purpose. I've never seen an auditor be confused about whether it's appropriate to disclose things like passwords, exact locations of secure facilities, identities of key employees etc. This is not their first rodeo. In my experience too often the "legitimate wishes" of an audit subject involve not disclosing the very sort of things the audit is in fact intended to unearth, such as unmitigated risks and poorly conceived processes. So "too bad" has to be the answer in those cases.

> * Is it a problem that adding additional costs to becoming a CA
> discourages new and possibly innovative companies from entering the market?

Ultimately the goal would be that RP audits would be either cost neutral (assuming the eventual elimination of the current less than ideal audit system) or only slightly more expensive (reflecting costs from choosing higher quality auditors or from more thorough conduct of the audits)

If we want to make it cheaper, my understanding is that the _utterly_ useless insurance requirement is still there. If one RP audit finds one bug before it bites us in the real world, that was already a better investment than, as far as I know, all the insurance ever purchased by CAs in the history of the web PKI.

John Nagle

Sep 2, 2016, 3:15:45 PM
to dev-secur...@lists.mozilla.org
> September 2016 11:19:49 UTC+1, Gervase Markham wrote:
>>> Have you considered what was done for CNNIC? In that case, we
>>> distrusted all certificates issued after a certain time. We used
>>> a whitelist for determining this, but it would be possible to use
>>> the notBefore date in the certificate. A CA could dodge this by
>>> backdating, but if the CA were also committed to putting all its
>>> certs in to CT, then the backdating would be noticeable.
>> Nick Lamb wrote:
> That's a good point, I knew of the CNNIC outcome but didn't add it to
> my list. It has the obvious GOOD factor that we know Mozilla can
> successfully do it.
>
> I suspect that a whitelist will almost always prove necessary. Even
> the suspicion of backdating happening undermines the value of this
> sanction.

This is approaching a workable technical solution.

The certificate transparency system is already well defined.

Firefox could have flags associated with root certificates, perhaps
as follows:

1. For certs under this root cert, always check
certificate revocation list. (presumably via OCSP).
Fail if revoked.

2. For certs under this root cert, always check
CA's certificate transparency server. Fail
if not found.

3. For certs under this root cert, always check
Mozilla-run whitelist in addition to other
checks. This would probably be in the form
of a CT server which tracked the CA's CT
server, but from which certificates could
be deleted if necessary.

This provides the requested mechanism short of distrust. Only
#3 is new; Google Chrome already does #1 and #2. #3 would
be turned on in case of problems at a CA.
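
A sketch of how such per-root flags might be represented; the names are entirely hypothetical and do not correspond to actual NSS or Firefox data structures:

    from dataclasses import dataclass

    @dataclass
    class RootPolicy:
        """Per-root enforcement flags matching the three options above."""
        require_ocsp_check: bool = False         # 1: hard-fail revocation checking
        require_ct_logged: bool = False          # 2: fail if absent from CT logs
        require_mozilla_whitelist: bool = False  # 3: also check a Mozilla-run whitelist

    SANCTIONED_ROOTS = {
        "Example Problematic Root CA": RootPolicy(require_ocsp_check=True,
                                                  require_ct_logged=True,
                                                  require_mozilla_whitelist=True),
    }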

Google's statements in 2015 indicated that it was their intention
to do something quite similar in Chrome:

https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxjZXJ0aWZpY2F0ZXRyYW5zcGFyZW5jeXxneDoyZGU5Yjg1MmVjNzc5NjQz

Much of that has already been done.

CT support in Firefox was proposed in 2013 and a patch was generated,
but seems to be languishing. Last update was in January 2016.
The problem appears to be policy, not technology. See:

https://bugzilla.mozilla.org/show_bug.cgi?id=944175


John Nagle
SiteTruth


Patrick Figel

Sep 2, 2016, 4:04:25 PM
to na...@animats.com, dev-secur...@lists.mozilla.org
On 02/09/16 21:14, John Nagle wrote:
> 2. For certs under this root cert, always check
> CA's certificate transparency server. Fail
> if not found.

To my knowledge, CT does not have any kind of online check mechanism.
SCTs can be embedded in the certificate (at the time of issuance),
delivered as part of the TLS handshake or via OCSP stapling.

In practice that means certificates will either have to be re-issued, or
website operators need to modify their server software and configuration
(not many sites currently deliver SCTs). In terms of real-world impact,
you probably could just as well pull the root completely.

I believe there are two possible solutions if CT enforcement is what the
community decides on:

1. Enforce CT only after a certain date, after which WoSign will need
to embed qualified SCTs. This check can be bypassed if the CA
backdates certificates (which is problematic, given the history of
backdating certificates in this particular case.)

2. Verify that the certificates either have a qualified SCT *or* are
explicitly white-listed as certificates that have been issued prior
to WoSign implementing CT. There are a number of possible
implementations for this (Google's Safe Browsing, etc.), but they'd
all require a non-trivial amount of development work.

John Nagle

Sep 2, 2016, 6:48:41 PM
to dev-secur...@lists.mozilla.org
On 09/02/2016 01:04 PM, Patrick Figel wrote:
> On 02/09/16 21:14, John Nagle wrote:
>> 2. For certs under this root cert, always check CA's certificate
>> transparency server. Fail if not found.
>
> To my knowledge, CT does not have any kind of online check
> mechanism. SCTs can be embedded in the certificate (at the time of
> issuance), delivered as part of the TLS handshake or via OCSP
> stapling.

You're supposed to be able to check if a cert is known by
querying an OCSP responder. OCSP stapling is just a faster way
to do that. Commercial OCSP responders are available. See

https://www.ejbca.org/docs/architecture-ocsp.html

https://technet.microsoft.com/en-us/library/cc770413(v=ws.10).aspx

and there's an open source responder:


https://www.nexusgroup.com/globalassets/media/documents/productsheet_eng/nexus_ps_ocsp-responder_en.pdf

What I'm suggesting is that mandatory external OCSP checking
against a Mozilla-operated server be enabled on a per-root-cert basis.
Mozilla's server would get its data from the CA's CT information, and,
after checking for problems, and removing any questionable certs,
would make it available on an OCSP server. Firefox would check
that server if the root cert was flagged for it. For CAs with problems,
this would give Mozilla fine-grained sanction control.

John Nagle

Matt Palmer

Sep 2, 2016, 7:15:49 PM
to dev-secur...@lists.mozilla.org
On Fri, Sep 02, 2016 at 03:48:13PM -0700, John Nagle wrote:
> On 09/02/2016 01:04 PM, Patrick Figel wrote:
> >On 02/09/16 21:14, John Nagle wrote:
> >>2. For certs under this root cert, always check CA's certificate
> >>transparency server. Fail if not found.
> >
> >To my knowledge, CT does not have any kind of online check
> >mechanism. SCTs can be embedded in the certificate (at the time of
> >issuance), delivered as part of the TLS handshake or via OCSP
> >stapling.
>
> You're supposed to be able to check if a cert is known by
> querying an OCSP responder. OCSP stapling is just a faster way
> to do that.

OCSP stapling is also a *privacy preserving* way to do that (also more
reliable, in addition to faster). I'm not sure that essentially snooping
(or at least having the ability to snoop) on the browsing habits of users
who happen to connect to a website that uses the certificate of a
poorly-trusted CA better serves the user community than just pulling the
root. I guess at least we're not training users to ignore security warnings
this way, and since if Mozilla is running the OCSP responder (or similar)
you're already trusting Mozilla not to snoop on your browsing...

- Matt

Matt Palmer

Sep 2, 2016, 7:24:18 PM
to dev-secur...@lists.mozilla.org
On Fri, Sep 02, 2016 at 11:19:11AM +0100, Gervase Markham wrote:
> On 31/08/16 20:43, Nick Lamb wrote:
> > This suggests the need for some options short of distrust which can
> > be deployed instead, but Mozilla does not seem to have any. If in
> > fact it already does, this would be a great place to say what they
> > are and discuss why they haven't been able to be used in recent
> > cases.
>
> Have you considered what was done for CNNIC? In that case, we distrusted
> all certificates issued after a certain time. We used a whitelist for
> determining this, but it would be possible to use the notBefore date in
> the certificate. A CA could dodge this by backdating, but if the CA were
> also committed to putting all its certs in to CT, then the backdating
> would be noticeable.

Is the idea here that the incentive for the CA to "behave" is potential
future re-inclusion? If we catch 'em doing the dodgy (which, with Google's
super-spider powers and submitting everything to CT, is at least *fairly*
likely these days) then any hope of their ever being re-included is
completely gone? Pulling the root at that future juncture of misissuance
might also be more palatable, since presumably the number of valid certs
being used in the wild should be reduced.

> > 1. Implement "Require SCTs" for problematic CAs. Notify the CA they
> > are obliged to CT log all certificates, inform subscribers etc. or
> > their subscriber's certificates will suddenly be invalid in Firefox
> > from some future date.
>
> This is not currently possible in Firefox, as Firefox does not have the
> ability to check SCTs. We hope to have that ability soon.

Even if Firefox was checking SCTs, as another poster said, if practically
every site needs to reconfigure themselves to deal with this, we may as well
just pull the root. Heck, getting a cert from somewhere else is almost
certainly *less* hassle than setting up SCT-embedded OCSP stapling or SCTs
in the TLS handshake. As far as embedding SCTs in the certs goes, I thought
the plan was to have the problematic CA *not* issue more certs...

> > 2. Create "at risk" category for problematic CAs which lasts some
> > finite period of time (or a period to be set in each case). Notify
> > the CA they are obliged to warn their subscribers of this status or
> > leave the Mozilla programme immediately. Publicly announce "at risk"
> > status and drive as PR.
>
> One issue to consider with this option would be that reputational damage
> is harder to quantify and control than a technical measure, which might
> be said to increase the risk that the action would be disproportionate.

Not to mention, where's the incentive for the CA to do this? If pulling the
root was a valid sanction, it'd be done. So, if the CA says "we're not
doing that" (or, as is more likely, "we'll do it... later", or "yes, we've
done it" but we have no way to verify it) what does Mozilla do next?

- Matt

--
I don't do veggies if I can help it. -- stevo
If you could see your colon, you'd be horrified. -- Iain Broadfoot
If he could see his colon, he'd be management. -- David Scheidt

Patrick Figel

Sep 2, 2016, 7:46:02 PM
to dev-secur...@lists.mozilla.org
> Mozilla not to snoop on your browsing...

In addition to these concerns (and assuming Mozilla would even be
willing to go down that route), I'm not sure how reliable a
Mozilla-operated OCSP responder would be, given that the majority of
users who visit sites that use WoSign are probably behind the GFW
(the Great Firewall of China).

If the answer is somewhere between "unreliable" and "extremely slow",
you might just as well pull the root (just for the sake of this
argument), which would mostly inconvenience site operators (as opposed
to every single Firefox user).

Matt Palmer

Sep 2, 2016, 7:58:34 PM
to dev-secur...@lists.mozilla.org
Ryan's earlier treatise against just pulling the root centred around
training users to ignore security warnings -- which is, let's face it,
exactly what would happen. At least with an OCSP responder, users might get
a degraded experience, but at least not *every* Firefox user who visits
one of 100k+ sites gets a lesson in where the "ignore cert error" button
is.

I suppose for a blacklisted root, Firefox could make it an unfixable error,
but that just trains people to go find a different browser -- so unless
there's a concerted agreement between browsers (an ugly thing from a legal
perspective, at least) to do this together, it's unlikely to be palatable.

- Matt

John Nagle

Sep 3, 2016, 5:31:43 PM
to dev-secur...@lists.mozilla.org
> Date: Sat, 3 Sep 2016 01:45:48 +0200
> From: Patrick Figel <patf...@gmail.com>
> Subject: Re: Sanctions short of distrust
>
> On 03/09/16 01:15, Matt Palmer wrote:
>> On Fri, Sep 02, 2016 at 03:48:13PM -0700, John Nagle wrote:
>>> On 09/02/2016 01:04 PM, Patrick Figel wrote:
>>>> On 02/09/16 21:14, John Nagle wrote:
>>>>> 2. For certs under this root cert, always check CA's
>>>>> certificate transparency server. Fail if not found.
>>>>
>>>> To my knowledge, CT does not have any kind of online check
>>>> mechanism. SCTs can be embedded in the certificate (at the time
>>>> of issuance), delivered as part of the TLS handshake or via OCSP
>>>> stapling.
>>>
>>> You're supposed to be able to check if a cert is known by querying
>>> an OCSP responder. OCSP stapling is just a faster way to do
>>> that.
>>
...
> In addition to these concerns, (and assuming Mozilla would even be
> willing to go down that route), I'm not sure how reliable a
> Mozilla-operated OCSP responder would be given that the majority of
> users who visit sites that use WoSign are probably behind the GFW.

It would probably be necessary to offer an OCSP responder in
China. Mozilla already has a small presence in China. See

https://www.mozilla.org/en-US/contact/spaces/beijing/

So Mozilla can apply for an ICP license, if it doesn't
have one already, and obtain server capacity in China.

John Nagle
SiteTruth

Jakob Bohm

Sep 5, 2016, 11:56:07 AM
to mozilla-dev-s...@lists.mozilla.org
On 03/09/2016 01:23, Matt Palmer wrote:
> On Fri, Sep 02, 2016 at 11:19:11AM +0100, Gervase Markham wrote:
>> On 31/08/16 20:43, Nick Lamb wrote:
>>> ...
>> ...
> ...
>>> 1. Implement "Require SCTs" for problematic CAs. Notify the CA they
>>> are obliged to CT log all certificates, inform subscribers etc. or
>>> their subscriber's certificates will suddenly be invalid in Firefox
>>> from some future date.
>>
>> This is not currently possible in Firefox, as Firefox does not have the
>> ability to check SCTs. We hope to have that ability soon.
>
> Even if Firefox was checking SCTs, as another poster said, if practically
> every site needs to reconfigure themselves to deal with this, we may as well
> just pull the root. Heck, getting a cert from somewhere else is almost
> certainly *less* hassle than setting up SCT-embedded OCSP stapling or SCTs
> in the TLS handshake. As far embedding SCTs in the certs, I thought the
> plan was to have the problematic CA *not* issue more certs...
>

Indeed, I have found that a number of common web server implementations
simply lack the ability to do OCSP stapling at all.

Kurt Roeckx

Sep 6, 2016, 3:31:33 AM
to mozilla-dev-s...@lists.mozilla.org
On 2016-09-05 17:55, Jakob Bohm wrote:
> Indeed, I have found that a number of common web server implementations
> simply lack the ability to do OCSP stapling at all.

I would really like to see OCSP stapling become mandatory. Currently
only around 25% of servers seem to do it, and progress seems to be very
slow. I'm wondering if there is something we can do so that it's used
more.

About the only idea I have is to do something with TLS 1.3, like making
OCSP stapling mandatory if you have a non-self-signed certificate. But I
don't see that working out.


Kurt

Nick Lamb

Sep 6, 2016, 4:13:34 AM
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, 6 September 2016 08:31:33 UTC+1, Kurt Roeckx wrote:
> I would really like to see OCSP stapling as mandatory. There currently
> only seem to be around 25% of the servers that do it, and the progress
> seem to be very slow. I'm wondering if there is something we can do so
> that it's used more.

We see a small but significant fraction of servers where enabling OCSP stapling breaks everything: the server isn't permitted to access the OCSP server (usually because of a well-intentioned firewall rule forbidding outbound TCP connections to the Internet from the web server), it staples the resulting error to the certificate, and browsers then reject the resulting unverified certificate.

Quality of implementation for OCSP stapling seems to remain poor in at least apache and nginx, two of the most popular servers. Apache's in particular gives me that OpenSSL "We read this standards document and implemented everything in it as a series of config options without any understanding" feeling, rather than Apache's maintainers taking it upon themselves to figure out what will actually work best for most servers and implementing that.

I would also be more enthusiastic about multi-stapling than the original stapling, since I get the impression that all too often any problems aren't with the leaf certificate's OCSP response but with an intermediate, and so stapling only the response for the leaf won't help there.

Kurt Roeckx

Sep 6, 2016, 4:26:29 AM
to mozilla-dev-s...@lists.mozilla.org
On 2016-09-06 10:13, Nick Lamb wrote:
> Quality of implementation for OCSP stapling seems to remain poor in at least apache and nginx, two of the most popular servers. Apache's in particular gives me that OpenSSL "We read this standards document and implemented everything in it as a series of config options without any understanding" feeling, rather than Apache's maintainers taking it upon themselves to figure out what will actually work best for most servers and implementing that.

If you think there is something we can do in OpenSSL to improve this,
please let us know.


Kurt

Jakob Bohm

Sep 6, 2016, 8:17:12 AM
to mozilla-dev-s...@lists.mozilla.org
Here are a list of software where I have personally observed bad OCSP
stapling support:

OpenSSL 1.0.x itself: There are hooks to provide stapled leaf OCSP
responses in sessions, but no meaningful sample code to do this right
(e.g. caching, error handling etc.). I am working on my own add-on code
for this, but it is not complete and not deployed.
There is no built-in support for multistapling and no clear
documentation on how to add arbitrary TLS extensions (such as this) to
an OpenSSL application.

OpenSSL 1.1.x itself: This is a heavily rewritten library and very new
at this time, basic reliability procedures suggest waiting a few patch
levels before deployment.

Stunnel stand alone SSL/TLS filter (used with e.g. Varnish reverse
proxies): OCSP stapling is on their TODO-list, but not yet included.

Pound light-weight reverse proxy with SSL/TLS front end: No OCSP
stapling support in the standard version.

IIS for Windows Server 2008 (latest IIS supporting pure 32 bit
configurations): No obvious (if any) OCSP stapling support.

Kurt Roeckx

Sep 6, 2016, 9:38:28 AM
to mozilla-dev-s...@lists.mozilla.org
On 2016-09-06 14:16, Jakob Bohm wrote:
> On 06/09/2016 10:25, Kurt Roeckx wrote:
>> If you think there is something we can do in OpenSSL to improve this,
>> please let us know.
>
> Here are a list of software where I have personally observed bad OCSP
> stapling support:
>
> OpenSSL 1.0.x itself: There are hooks to provide stapled leaf OCSP
> responses in sessions, but no meaningful sample code to do this right
> (e.g. caching, error handling etc.) I am working on my own add-on code
> for this, but it is not complete and not deployed.

As far as I know the functions for that are:
https://www.openssl.org/docs/manmaster/ssl/SSL_set_tlsext_status_type.html

> There is no builtin support for multistapling and no clear
> documentation on how to add arbitrary TLS extensions (such as this) to
> an OpenSSL application.

SSL_CTX_add_server_custom_ext() was added in 1.0.2, see
https://www.openssl.org/docs/manmaster/ssl/SSL_CTX_add_server_custom_ext.html

PS: I just found: https://istlsfastyet.com/

This is probably also getting a little off topic.


Kurt

Jakob Bohm

Sep 6, 2016, 10:15:21 AM
to mozilla-dev-s...@lists.mozilla.org
Neither of those calls (which I know) provides the lacking
functionality. Specifically, the _tlsext_ OCSP calls require each
server to design and build its own OCSP response acquisition and
caching code, while the _server_custom_ functions seemingly lack the
functionality to implement multistapling, at least as I read them.

>
> PS: I just found: https://istlsfastyet.com/
>
> This is probably also getting a little off topic.
>

But yes, the details of OpenSSL are off-topic in this newsgroup; these
were merely two entries in a long list of HTTPS server implementations
that cannot easily be configured to send the OCSP stapling responses
that some other posters suggested would be an appropriate workaround
for half-bad CAs.

The point of the list was simply to explain why requiring OCSP stapling
would not work on the current Internet.

Martin Rublik

Sep 6, 2016, 10:43:34 AM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Tue, Sep 6, 2016 at 2:16 PM, Jakob Bohm <jb-mo...@wisemo.com> wrote:

> Here are a list of software where I have personally observed bad OCSP
> stapling support:
>
> IIS for Windows Server 2008 (latest IIS supporting pure 32 bit
> configurations): No obvious (if any) OCSP stapling support.


AFAIK IIS 7.0 supports OCSP stapling and it is enabled by default, for more
information see https://unmitigatedrisk.com/?p=95 or
https://www.digicert.com/ssl-support/windows-enable-ocsp-stapling-on-server.htm


Martin

Jakob Bohm

Sep 6, 2016, 10:54:14 AM
to mozilla-dev-s...@lists.mozilla.org
Nice surprise (if true); this was unreasonably well hidden, as there is,
for example, no indication of it in any relevant part of the
administration user interface. I'll have to devise a test to check
whether it actually does staple OCSP on our servers.

Ryan Hurst

Sep 6, 2016, 12:15:28 PM
to mozilla-dev-s...@lists.mozilla.org
It is true. Windows (and IIS as a result) was the first to support OCSP stapling and has the most robust support for it. Sleevi has a nice summary of OCSP stapling issues here - https://gist.github.com/sleevi/5efe9ef98961ecfb4da8

Let's start a new thread to discuss OCSP stapling rather than re-using this one.

Jakob Bohm

Sep 6, 2016, 12:43:35 PM
to mozilla-dev-s...@lists.mozilla.org
As I stated elsewhere, the only point of mentioning OCSP problems in
here was to counter repeated suggestions in this thread that adding
something to stapled OCSP responses would be a viable solution
to dealing with partially distrusted CAs. I had no intention of
discussing the details of OCSP stapling implementation in this forum.

Rob Stradling

Sep 8, 2016, 7:09:25 AM
to Patrick Figel, na...@animats.com, dev-secur...@lists.mozilla.org
On 02/09/16 21:04, Patrick Figel wrote:
<snip>
> I believe there are two possible solutions if CT enforcement is what the
> community decides on:
>
> 1. Enforce CT only after a certain date, after which WoSign will need
> to embed qualified SCTs. This check can be bypassed if the CA
> backdates certificates (which is problematic, given the history of
> backdating certificates in this particular case.)

AIUI, Chrome doesn't currently consider the difference between the
certificate's notBefore date and the corresponding SCTs' timestamps when
evaluating whether or not the certificate is "CT qualified".

To guard against backdating, ISTM that future versions of Chrome (and
hopefully Firefox too) could require, for certain CAs, that:
1. The certificate MUST be "CT qualified".
and
2. In addition to the standard requirements for being "CT qualified",
the SCT timestamps MUST be within N seconds of the certificate's
notBefore date.
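
A minimal sketch of that freshness rule, with an illustrative N and timestamps in Unix milliseconds (as SCTs carry them); this is not a description of any existing Chrome or Firefox behaviour:

    MAX_SKEW_SECONDS = 7 * 24 * 3600          # illustrative N: one week

    def sct_timestamps_plausible(not_before_ms, sct_timestamps_ms,
                                 max_skew_seconds=MAX_SKEW_SECONDS):
        """True if every SCT timestamp lies within N seconds of notBefore.
        Most meaningful for SCTs embedded in the certificate at issuance."""
        return all(abs(ts - not_before_ms) <= max_skew_seconds * 1000
                   for ts in sct_timestamps_ms)

    # An embedded SCT a year newer than notBefore would fail the check:
    print(sct_timestamps_plausible(1472601600000, [1504137600000]))   # False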

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

Ryan Sleevi

Sep 8, 2016, 12:44:19 PM
to mozilla-dev-s...@lists.mozilla.org
On Thursday, September 8, 2016 at 4:09:25 AM UTC-7, Rob Stradling wrote:
> > 1. Enforce CT only after a certain date, after which WoSign will need
> > to embed qualified SCTs. This check can be bypassed if the CA
> > backdates certificates (which is problematic, given the history of
> > backdating certificates in this particular case.)
>
> AIUI, Chrome doesn't currently consider the difference between the
> certificate's notBefore date and the corresponding SCTs' timestamps when
> evaluating whether or not the certificate is "CT qualified".
>
> To guard against backdating, ISTM that future versions of Chrome (and
> hopefully Firefox too) could require, for certain CAs, that:
> 1. The certificate MUST be "CT qualified".
> and
> 2. In addition to the standard requirements for being "CT qualified",
> the SCT timestamps MUST be within N seconds of the certificate's
> notBefore date.

Without wanting to derail this thread with discussions of Chrome's CT implementations, to the point it's relevant to a Firefox implementation, this is something better as part of the monitoring ecosystem (and with a CA/B Forum Guideline) than as a client enforcement.

For pre-certificates: The earliest (trusted) SCT represents a reasonable lower-bound of the issuance date, for purposes of policy. That is, for certificates issued with embedded SCTs, any policy controls can be applied on the basis of the embedded SCT, if present.

However, for SCTs delivered via OCSP or TLS extensions, there is zero relationship between the notBefore and any SCTs. This is because the set of SCTs delivered with a certificate - or the logs in which a certificate is logged - may change over a certificate's lifetime, as part of the normal operation of trusting and distrusting logs.

Consider a hypothetical situation, possible today, in which a certificate is logged to two logs (Google Pilot and DigiCert). These are the only logs the certificate is logged in, and the cert is logged within a few hours of issuance, and SCTs delivered via OCSP. Now, let's imagine that the Pilot log is distrusted one year into the cert's 3 year lifetime. In order to comply with Chrome's policy (and what Chrome believes is a good guideline for other UAs, at least at present), the cert would need to be logged to Google's Aviator or Rocketeer logs, which have as-of-yet never seen the certificate. When the CA does so, the SCT will be issued at least 1y > the cert's notBefore, and then embedded in the OCSP response (such that the new set is Aviator, DigiCert). Now, imagine that 2y into the cert's lifetime, DigiCert's log is distrusted, and so the cert is then logged to Symantec's log. Now, the SCT set is Aviator, Symantec, and the SCTs are respectively 1y and 2y newer than the cert. This is not proof of backdating.

While the above example is, arguably, convoluted, and ignores the ways in which a log may and has been distrusted (e.g. 'freezing' a log at a particular timestamp, as has been done so far for the operational failures that have caused log distrust), it's meant to highlight that the assumption and design of CT don't require a relationship between the notBefore and the SCT validity period of the date.

Obviously, for pre-certificates, the act of backdating can be trivially detected, since it's cryptographically guaranteed that the cert cannot have been issued earlier than the latest SCT embedded within it (assuming the log's clock is functioning correctly), but this doesn't apply to other SCT delivery mechanisms, and this is arguably by design.

Among other reasons, I believe this is why the client complexity is not worth it, but that this can and should be addressed through a combination of explicit policy actions (within Mozilla policy) and within the Baseline Requirements, and there are many ways in which we can further auditable criteria to ensure this, in a programmatically detectable way, without foisting this upon clients to enforce.

Matt Palmer

unread,
Sep 8, 2016, 7:33:26 PM9/8/16
to dev-secur...@lists.mozilla.org
On Thu, Sep 08, 2016 at 09:44:04AM -0700, Ryan Sleevi wrote:
> On Thursday, September 8, 2016 at 4:09:25 AM UTC-7, Rob Stradling wrote:
> > > 1. Enforce CT only after a certain date, after which WoSign will need
> > > to embed qualified SCTs. This check can be bypassed if the CA
> > > backdates certificates (which is problematic, given the history of
> > > backdating certificates in this particular case.)
> >
> > AIUI, Chrome doesn't currently consider the difference between the
> > certificate's notBefore date and the corresponding SCTs' timestamps when
> > evaluating whether or not the certificate is "CT qualified".
> >
> > To guard against backdating, ISTM that future versions of Chrome (and
> > hopefully Firefox too) could require, for certain CAs, that:
> > 1. The certificate MUST be "CT qualified".
> > and
> > 2. In addition to the standard requirements for being "CT qualified",
> > the SCT timestamps MUST be within N seconds of the certificate's
> > notBefore date.
>
> Without wanting to derail this thread with discussions of Chrome's CT
> implementations, to the point it's relevant to a Firefox implementation,
> this is something better as part of the monitoring ecosystem (and with a
> CA/B Forum Guideline) than as a client enforcement.

I read Rob's proposal as one specifically for CAs under some sort of
"quintuple secret probation" arrangement, where there has been a history of
shenanigans with notBefore dates. In that circumstance, as a proactive
enforcement mechanism, it makes a certain degree of sense. Post-hoc
detection through CT logs is without value for such CAs; the reason they're
on probation is that they've got too many customers to just pull the
root and ship a valid-certs whitelist, so if further shenanigans are detected
via CT, what recourse is available? Revocation is off the table,
because it's under the CA's control, and we've seen recent examples of
revocation equivocation from CAs.

Therefore, proactive enforcement of standards on a cert-by-cert basis is,
as far as I can tell, about all there is left. Do you see things
differently?

- Matt

Rob Stradling

unread,
Sep 9, 2016, 7:28:32 AM9/9/16
to Matt Palmer, dev-secur...@lists.mozilla.org
On 09/09/16 00:32, Matt Palmer wrote:
> On Thu, Sep 08, 2016 at 09:44:04AM -0700, Ryan Sleevi wrote:
>> On Thursday, September 8, 2016 at 4:09:25 AM UTC-7, Rob Stradling wrote:
>>>> 1. Enforce CT only after a certain date, after which WoSign will need
>>>> to embed qualified SCTs. This check can be bypassed if the CA
>>>> backdates certificates (which is problematic, given the history of
>>>> backdating certificates in this particular case.)
>>>
>>> AIUI, Chrome doesn't currently consider the difference between the
>>> certificate's notBefore date and the corresponding SCTs' timestamps when
>>> evaluating whether or not the certificate is "CT qualified".
>>>
>>> To guard against backdating, ISTM that future versions of Chrome (and
>>> hopefully Firefox too) could require, for certain CAs, that:
>>> 1. The certificate MUST be "CT qualified".
>>> and
>>> 2. In addition to the standard requirements for being "CT qualified",
>>> the SCT timestamps MUST be within N seconds of the certificate's
>>> notBefore date.
>>
>> Without wanting to derail this thread with discussions of Chrome's CT
>> implementations, to the point it's relevant to a Firefox implementation,
>> this is something better as part of the monitoring ecosystem (and with a
>> CA/B Forum Guideline) than as a client enforcement.
>
> I read Rob's proposal as one specifically for CAs under some sort of
> "quintuple secret probation" arrangement, where there has been a history of
> shenanigans with notBefore dates.

Yes, precisely that.

Rob Stradling

unread,
Sep 9, 2016, 7:42:12 AM9/9/16
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On 08/09/16 17:44, Ryan Sleevi wrote:
> On Thursday, September 8, 2016 at 4:09:25 AM UTC-7, Rob Stradling wrote:
>>> 1. Enforce CT only after a certain date, after which WoSign will need
>>> to embed qualified SCTs. This check can be bypassed if the CA
>>> backdates certificates (which is problematic, given the history of
>>> backdating certificates in this particular case.)
>>
>> AIUI, Chrome doesn't currently consider the difference between the
>> certificate's notBefore date and the corresponding SCTs' timestamps when
>> evaluating whether or not the certificate is "CT qualified".
>>
>> To guard against backdating, ISTM that future versions of Chrome (and
>> hopefully Firefox too) could require, for certain CAs, that:
>> 1. The certificate MUST be "CT qualified".
>> and
>> 2. In addition to the standard requirements for being "CT qualified",
>> the SCT timestamps MUST be within N seconds of the certificate's
>> notBefore date.
>
> Without wanting to derail this thread with discussions of Chrome's CT implementations, to the point it's relevant to a Firefox implementation, this is something better as part of the monitoring ecosystem (and with a CA/B Forum Guideline) than as a client enforcement.
>
> For pre-certificates: The earliest (trusted) SCT represents a reasonable lower-bound of the issuance date, for purposes of policy. That is, for certificates issued with embedded SCTs, any policy controls can be applied on the basis of the embedded SCT, if present.
>
> However, for SCTs delivered via OCSP or TLS extensions, there is zero relationship between the notBefore and any SCTs. This is because the set of SCTs delivered with a certificate - or the logs in which a certificate is logged - may change over a certificate's lifetime, as part of the normal operation of trusting and distrusting logs.
>
> Consider a hypothetical situation, possible today, in which a certificate is logged to two logs (Google Pilot and DigiCert). These are the only logs the certificate is logged in, and the cert is logged within a few hours of issuance, and SCTs delivered via OCSP. Now, let's imagine that the Pilot log is distrusted one year into the cert's 3 year lifetime. In order to comply with Chrome's policy (and what Chrome believes is a good guideline for other UAs, at least at present), the cert would need to be logged to Google's Aviator or Rocketeer logs, which have as-of-yet never seen the certificate. When the CA does so, the SCT will be issued at least 1y > the cert's notBefore, and then embedded in the OCSP response (such that the new set is Aviator, DigiCert). Now, imagine that 2y into the cert's lifetime, DigiCert's log is distrusted, and so the cert is then logged to Symantec's log. Now, the SCT set is Aviator, Symantec, and the SCTs are respectively 1y and 2y newer than the cert. This is not proof of backdating.
>
> While the above example is, arguably, convoluted, and ignores the ways in which a log may and has been distrusted (e.g. 'freezing' a log at a particular timestamp, as has been done so far for the operational failures that have caused log distrust), it's meant to highlight that the assumption and design of CT don't require a relationship between the notBefore and the SCT validity period of the date.

That's a good point. So, to fix my proposal...

For CAs that are on (borrowing Matt's wording) "quintuple secret
probation" due to a "history of shenanigans with notBefore dates",
browsers could require that:
1. The certificate MUST be "CT qualified" via SCTs embedded in the
certificate. (SCTs distributed by the two TLS extension mechanisms are
not permitted for these CAs).
and
2. The latest timestamp from the embedded SCTs MUST be within N
seconds of the certificate's notBefore date.
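
As a rough sketch of the client-side check being proposed here (assuming the rest of the "CT qualified" evaluation - SCT signature verification, log trust decisions, and so on - happens elsewhere; the one-day skew is an arbitrary placeholder for N):

from datetime import timedelta

def passes_backdating_check(not_before, embedded_sct_timestamps,
                            max_skew=timedelta(days=1)):
    # Embedded SCTs are required for CAs subject to this rule; SCTs delivered
    # via the TLS extension or stapled OCSP are not accepted here.
    if not embedded_sct_timestamps:
        return False
    # The latest embedded SCT must be within N seconds of notBefore; the
    # backdating concern is an SCT issued long after the claimed notBefore.
    latest = max(embedded_sct_timestamps)
    return latest - not_before <= max_skew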

> Obviously, for pre-certificates, the act of backdating can be trivially detected, since it's cryptographically guaranteed that the cert cannot have been issued earlier than the latest SCT embedded within it (assuming the log's clock is functioning correctly), but this doesn't apply to other SCT delivery mechanisms, and this is arguably by design.
>
> Among other reasons, I believe this is why the client complexity is not worth it, but that this can and should be addressed through a combination of explicit policy actions (within Mozilla policy) and within the Baseline Requirements, and there are many ways in which we can further auditable criteria to ensure this, in a programmatically detectable way, without foisting this upon clients to enforce.

I only intended to suggest a technical mechanism that could conceivably
be implemented.

To evaluate "worth doing", I fear I'd have to wade deep into related
policy matters. ;-)

(Yeah, I know - spoken like a true IETF-er ;-) ).

Ryan Sleevi

unread,
Sep 9, 2016, 1:25:52 PM9/9/16
to mozilla-dev-s...@lists.mozilla.org
On Friday, September 9, 2016 at 4:42:12 AM UTC-7, Rob Stradling wrote:
> That's a good point. So, to fix my proposal...
>
> For CAs that are on (borrowing Matt's wording) "quintuple secret
> probation" due to a "history of shenanigans with notBefore dates",
> browsers could require that:

Right, I suppose I could have been clearer - I don't think there's a "quintuple secret probation" concept, and that promoting it as such is probably harmful, long term, to both Mozilla users and the overall ecosystem.

We shouldn't think of CT as a 'punishment' or 'probationary period'. Transparency is just one aspect of public trust, and all CAs - whether misissuance or not - should ideally adopt CT in a verifiable way.

While it's true that some CAs may have timelines for CT accelerated to improve trust by improving transparency, we should be careful about advocating solutions that try to bifurcate trust.

Matt Palmer

unread,
Sep 9, 2016, 8:30:53 PM9/9/16
to dev-secur...@lists.mozilla.org
On Fri, Sep 09, 2016 at 12:41:31PM +0100, Rob Stradling wrote:
> For CAs that are on (borrowing Matt's wording) "quintuple secret
> probation" due to a "history of shenanigans with notBefore dates",
> browsers could require that:
> 1. The certificate MUST be "CT qualified" via SCTs embedded in the
> certificate. (SCTs distributed by the two TLS extension mechanisms are
> not permitted for these CAs).

I'd be OK with SCTs delivered via TLS extension or OCSP stapling, as long as
the SCTs had an appropriately aged timestamp. That's not practical for site
operators to enforce, but from an "assurance of non-backdating" perspective,
it'd be fine.

The problem with embedding precert SCTs into the certificate is that in
order to ensure compliance, *every* cert (including previously issued certs)
from the CA in question must have the SCTs embedded (you can't use notBefore
as a cutoff, because that's the problem we're trying to solve!). That means
that every site operator of the CA in question has to get a new cert issued
and installed. If we're going to force every site operator to get a new
cert, we may as well just pull the root and force site operators to get a
new cert issued from a different CA entirely. It's the same amount of
effort for site operators, and avoids the need to modify user agents to
cope with checking SCTs against notBefore.

- Matt

Matt Palmer

unread,
Sep 9, 2016, 8:36:06 PM9/9/16
to dev-secur...@lists.mozilla.org
On Fri, Sep 09, 2016 at 10:25:43AM -0700, Ryan Sleevi wrote:
> On Friday, September 9, 2016 at 4:42:12 AM UTC-7, Rob Stradling wrote:
> > That's a good point. So, to fix my proposal...
> >
> > For CAs that are on (borrowing Matt's wording) "quintuple secret
> > probation" due to a "history of shenanigans with notBefore dates",
> > browsers could require that:
>
> Right, I suppose I could have been clearer - I don't think there's a
> "quintuple secret probation" concept, and that promoting it as such is
> probably harmful, long term, to both Mozilla users and the overall
> ecosystem.

Whilst the name was somewhat tongue-in-cheek, the concept is certainly
existent -- Symantec being required by Chrome to pre-log all certs issued
was the precedent I had in mind. There, of course, the phased rollout was
easier, because there was no indication of any notBefore shenanigans, so
trusting it as an indicator of "policy requirement" was reasonable.

> We shouldn't think of CT as a 'punishment' or 'probationary period'.
> Transparency is just one aspect of public trust, and all CAs - whether
> misissuance or not - should ideally adopt CT in a verifiable way.

Oh, absolutely -- having all certs logged to CT is by far the best long-term
outcome for the entire ecosystem. The future is not equally distributed,
though, just yet. <grin>

- Matt

Rob Stradling

unread,
Sep 12, 2016, 4:54:31 AM9/12/16
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On 09/09/16 18:25, Ryan Sleevi wrote:
> On Friday, September 9, 2016 at 4:42:12 AM UTC-7, Rob Stradling wrote:
>> That's a good point. So, to fix my proposal...
>>
>> For CAs that are on (borrowing Matt's wording) "quintuple secret
>> probation" due to a "history of shenanigans with notBefore dates",
>> browsers could require that:
>
> Right, I suppose I could have been clearer - I don't think there's a "quintuple secret probation" concept, and that promoting it as such is probably harmful, long term, to both Mozilla users and the overall ecosystem.
>
> We shouldn't think of CT as a 'punishment' or 'probationary period'.

I was thinking of it as a 'consequence'. ;-)

> Transparency is just one aspect of public trust, and all CAs - whether misissuance or not - should ideally adopt CT in a verifiable way.

+1, of course.

> While it's true that some CAs may have timelines for CT accelerated to improve trust by improving transparency, we should be careful about advocating solutions that try to bifurcate trust.

True.

Ryan Sleevi

unread,
Sep 12, 2016, 5:46:40 PM9/12/16
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, August 31, 2016 at 12:43:50 PM UTC-7, Nick Lamb wrote:
> I have spent some time thinking about this, but I am only one person, and one with relatively little in-depth knowledge of the Mozilla project, so I think I will lay out the options I've thought about and invite feedback though particularly any alternative suggestions.

Returning to the topic of this thread, there were a number of alternatives put forward that didn't rise to the summary level captured here:

https://groups.google.com/d/msg/mozilla.dev.security.policy/k9PBmyLCi8I/hj_KMPGUDAAJ

To recap, and to combine your suggestions, we have:
A) Distrust with option to manually add back
B) Distrust with no option to manually add back (aka Blacklist)
C) Distrust with Whitelist of Certs
C.1) Whitelist based on all certs (several hundred thousand)
C.2) Whitelist based on Alexa Top 1M (~60 thousand)
D) Trust w/ Heuristic
D.1) Certs with notBefore < some date trusted implicitly, certs with notBefore > some date are CT logged
D.2) Trust if C=CN is present in subject name (WoSign's proposal)
E) Trust w/ PR
E.1) Require some form of mandatory disclosure of misissuance
F) Trust w/o any action


Nick's proposal #1 was already captured by D1 in the previous thread.
Nick's proposal #2 is captured by E1.
Nick's proposal #3 is captured by F (no concrete steps were proposed).

I've not included the RP discussion in the above summary, because I don't believe it talks to actions short of distrust, but about alternative methods of determining trustworthiness. It doesn't provide an actionable framework for, say, dealing with issues that arise when a CA behaves against the public interest.


To that end, I'm going to offer one more suggestion for consideration:
G) Distrust with a Whitelist of Hosts

The issue with C is that it becomes easily inflated by issuing certificates, even if they're not used; that is, a free certificate provider can easily exceed a reasonable size of a whitelist, by issuing many certificates for a given host, even if they're not used.

Whitelisting by hostname may offer a reasonable solution that balances user need and performance.


Consider if we start with the list of certificates issued by StartCom and WoSign, assuming the two are the same party (as all reasonable evidence suggests). Extract the subjectAltName from every one of these certificates, and then compare against the Alexa Top 1M. This yields more than 60K certificates, at 1920K in a 'naive' whitelist.

However, if you compare based on base domain (as it appears in Alexa), you end up with 18,763 unique names, with a much better compressibility. For example, when compared with Chrome's Public Suffix List DAFSA implementation (as one such compressed data structure implementation), this ends up occupying 126K of storage.

126K may be within the range of acceptable to ship within a binary. Further, there are a number of things that can be done to reduce this overhead:

1) Large vendors (such as microsoft.com and amazonaws.com) appear within this list, but likely don't wish to; this gives a natural reduction function
2) This doesn't fully account for revocations
3) It could be combined with, say, requiring CT for new certs. While this is hardly a perfect function, given the backdating, it could effectively monitor the issuance of new certificates.
4) The Alexa list includes known third-party domain providers (e.x. myqnapcloud.com, myasustor.com) that could be factored out, as they're not centrally managed
5) Names naturally expire on this list, giving a reasonable stepping function


This could be combined with a large scale distrust, and operates on the theory that the primary concerns with a distrust event are minimizing user impact, while allowing sites sufficient time to migrate, and this could offer a one-or-two-release-cycle "graceful turndown" of a CA. If other browsers were able to adopt such a solution, such that the industry could be in rough harmony, this could offer a graduated point to distrust.
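
A rough sketch of the whitelist construction described above (the base_domain helper is a naive stand-in for a real Public Suffix List lookup, and the Alexa file is assumed to be the usual "rank,domain" CSV - both are assumptions, not part of the proposal):

from cryptography import x509

def san_dns_names(pem_bytes):
    # Collect the dNSNames from a certificate's subjectAltName, if present.
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    except x509.ExtensionNotFound:
        return []
    return san.value.get_values_for_type(x509.DNSName)

def base_domain(hostname):
    # Placeholder eTLD+1 reduction; a real implementation would consult the PSL.
    return ".".join(hostname.lstrip("*.").split(".")[-2:])

def build_whitelist(cert_pems, alexa_csv_path):
    with open(alexa_csv_path) as f:
        alexa = {base_domain(line.strip().split(",")[1]) for line in f if "," in line}
    issued = {base_domain(name) for pem in cert_pems for name in san_dns_names(pem)}
    return issued & alexa    # eTLD+1s both issued-for and present in the Alexa Top 1M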

Adam Caudill

unread,
Sep 12, 2016, 8:28:06 PM9/12/16
to mozilla-dev-s...@lists.mozilla.org
Of all the possible options - G seems to be the most practical. It provides a few key benefits, that I see as making it a clear leader:

1) It can be implemented quickly. It has been discussed that C is rather complex because of the size of the list, with the only truly practical solution being the development of a new method of querying certificates to see if they are in the list. That would take time to design, build, and coordinate with various vendors to implement. Given the severity of the issues, and the lack of transparency, I believe it's clear that a solution to minimize future harm is needed as quickly as possible.

2) As pointed out, the list would naturally shrink over time; as domain names expire, certificates are revoked or expire (without new ones being issued), the list could be updated to make it smaller and smaller over time. So the impact of size will diminish as time goes on.

3) This provides an opportunity for a very clearly defined turndown - establish a date of general distrust when the whitelist becomes active, and a date of whitelist expiration. This makes it easy for all to have a general idea of when these restrictions will be enforced, giving vendors and customers clear expectations and time to prepare.

4) This also provides an opportunity for StartCom / WoSign to attempt to clean up their act, and continue servicing existing customers (while minimizing risk by limiting new certificates) during the process. They could re-apply for inclusion, demonstrating that they are now actually in compliance, without excessive harm to them or to users. While it would be easiest to simply call for the death penalty and be done, this could serve as a substantial enough wake-up call to get them to correct their issues and operate properly. When faced with the impending doom of the company, they are likely to be willing to consider more options to truly correct issues. Personally, I'm not at all convinced that the current management will be able to correct the issues, but if there's a way forward that can minimize risk and provide them a chance to change sufficiently that it minimizes impact to customers and end-users, that seems like the route to pursue.

When trying to strike a balance that preserves trust, and minimizes impact to users, there is no perfect solution, but this option seems to strike a very reasonable balance.

David Adrian

unread,
Sep 12, 2016, 8:45:09 PM9/12/16
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
In general, I like the Option G idea purely as a space-saving version of Option
C. That is, whitelisting certificates would be ideal.


>
> The issue with C is that it becomes easily inflated by issuing
> certificates, even if they're not used; that is, a free certificate
> provider can easily exceed a reasonable size of a whitelist, by issuing
> many certificates for a given host, even if they're not used.
>
> Whitelisting by hostname may offer a reasonable solution that balances
> user need and performance.
>
>
> Consider if we start with the list of certificates issued by StartCom and
> WoSign, assuming the two are the same party (as all reasonable evidence
> suggests). Extract the subjectAltName from every one of these certificates,
> and then compare against the Alexa Top 1M. This yields more than 60K
> certificates, at 1920K in a 'naive' whitelist.
>
> However, if you compare based on base domain (as it appears in Alexa), you
> end up with 18,763 unique names, with a much better compressibility. For
> example, when compared with Chrome's Public Suffix List DAFSA
> implementation (as one such compressed data structure implementation), this
> ends up occupying 126K of storage.
>

While I personally have no issue drawing the distinction at the Alexa Top
1M, this is still somewhat of an arbitrary cutoff. Beyond the arbitrary
nature, there's a large amount of churn of domains within the Top 1M. A
co-researcher in my group encountered over 1.5M unique domains in the Top
1M over a two-month period from March-May 2016. I haven't run the long-term
numbers myself, but will note that Censys/ScansIO have historical Top 1M
inclusion data [1]. Should this churn be factored into the whitelist? Which
Top 1M list is the "right" Top 1M list?


> 126K may be within the range of acceptable to ship within a binary.
> Further, there are a number of things that can be done to reduce this
> overhead:
>
> 1) Large vendors (such as microsoft.com and amazonaws.com) appear within
> this list, but likely don't wish to; this gives a natural reduction function
> 2) This doesn't fully account for revocations
> 3) It could be combined with, say, requiring CT for new certs. While this
> is hardly a perfect function, given the backdating, it could effectively
> monitor the issuance of new certificates.
>

Given that we _know_ backdating is an issue, I don't really see the point
of requiring CT for new certs, especially if the end goal is to remove
StartCom/WoSign (I don't know what any other end goal would be). If we're
worried about actual malicious issuance, acquiring a backdated certificate
seems well within reason. That being said, in the current
post-Symantec-require-CT-world, the bar to adding this requirement seems
low, provided StartCom/WoSign are embedding SCTs in new certificates.


> 4) The Alexa list includes known third-party domain providers (e.x.
> myqnapcloud.com, myasustor.com) that could be factored out, as they're
> not centrally managed
> 5) Names naturally expire on this list, giving a reasonable stepping
> function
>

6) I would also add that some domains will likely change certificate
authorities, as the WoSign news percolates outside of the "HTTPS
Enthusiast" circles, enabling the list to be stepped down even faster.


> This could be combined with a large scale distrust, and operates on the
> theory that the primary concerns with a distrust event are minimizing user
> impact, while allowing sites sufficient time to migrate, and this could
> offer a one-or-two-release-cycle "graceful turndown" of a CA. If other
> browsers were able to adopt such a solution, such that the industry could
> be in rough harmony, this could offer a graduated point to distrust.
>

[1]: https://scans.io/series/alexa-dl-top1mil




Peter Bowen

unread,
Sep 12, 2016, 9:09:05 PM9/12/16
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On Mon, Sep 12, 2016 at 2:46 PM, Ryan Sleevi <ry...@sleevi.com> wrote:
> To that end, I'm going to offer one more suggestion for consideration:
> G) Distrust with a Whitelist of Hosts
>
> The issue with C is that it becomes easily inflated by issuing certificates, even if they're not used; that is, a free certificate provider can easily exceed a reasonable size of a whitelist, by issuing many certificates for a given host, even if they're not used.
>
> Whitelisting by hostname may offer a reasonable solution that balances user need and performance.
>
>
> Consider if we start with the list of certificates issued by StartCom and WoSign, assuming the two are the same party (as all reasonable evidence suggests). Extract the subjectAltName from every one of these certificates, and then compare against the Alexa Top 1M. This yields more than 60K certificates, at 1920K in a 'naive' whitelist.
>
> However, if you compare based on base domain (as it appears in Alexa), you end up with 18,763 unique names, with a much better compressibility. For example, when compared with Chrome's Public Suffix List DAFSA implementation (as one such compressed data structure implementation), this ends up occupying 126K of storage.
>
> 126K may be within the range of acceptable to ship within a binary. Further, there are a number of things that can be done to reduce this overhead:
>
> 1) Large vendors (such as microsoft.com and amazonaws.com) appear within this list, but likely don't wish to; this gives a natural reduction function
> 2) This doesn't fully account for revocations
> 3) It could be combined with, say, requiring CT for new certs. While this is hardly a perfect function, given the backdating, it could effectively monitor the issuance of new certificates.
> 4) The Alexa list includes known third-party domain providers (e.x. myqnapcloud.com, myasustor.com) that could be factored out, as they're not centrally managed
> 5) Names naturally expire on this list, giving a reasonable stepping function

I'm not clear from this description, are you proposing to whitelist
eTLD+1 or full hostnames? How large would the lists be if you
whitelisted eTLD+1?

I also think per-issuer whitelists might make more sense than a single
massive WoSign/StartCom whitelist. This would have two advantages:
1) Helps limit blast radius of whitelisting a name/domain
2) Provides a path for WoSign/StartCom to continue issuing
certificates for use with trust stores and users that continue to
trust them. They could create new issuers from the existing roots
which wouldn't be subject to the whitelist. Users who want to trust
them simply need to add the roots to their trust store if the roots
are not already there.

Requiring new issuers would also allow #3 to be easily enforced over a
disjoint set from the whitelist and allow various pinning-like rules
to be created in software (browsers or otherwise).

Thanks,
Peter

Richard Wang

unread,
Sep 12, 2016, 9:14:34 PM9/12/16
to mozilla-dev-s...@lists.mozilla.org
Please don't mix the StartCom case with the WoSign case; StartCom is 100% independent as of 2015.

Even now, it is still independent in its systems, validation team and management team; we share only the CRL/OCSP distribution resources.


Best Regards,

Richard

-----Original Message-----
From: dev-security-policy [mailto:dev-security-policy-bounces+richard=wosig...@lists.mozilla.org] On Behalf Of Adam Caudill
Sent: Tuesday, September 13, 2016 7:30 AM
To: mozilla-dev-s...@lists.mozilla.org
Subject: Re: Sanctions short of distrust

Ryan Sleevi

unread,
Sep 12, 2016, 10:03:04 PM9/12/16
to mozilla-dev-s...@lists.mozilla.org
On Monday, September 12, 2016 at 6:09:05 PM UTC-7, Peter Bowen wrote:
> I'm not clear from this description, are you proposing to whitelist
> eTLD+1 or full hostnames? How large would the lists be if you
> whitelisted eTLD+1?

This is whitelisting at eTLD+1 (although with a slightly older version of the PSL - hence the reference to domains like myasustor.com); that is, it tries to be compatible with Alexa's notion of "effective domain".

> I also think per-issuer whitelists might make more sense than a single
> massive WoSign/StartCom whitelist.

From a security standpoint, absolutely; I was intentionally being a little handwavy on the technical details (e.g. you can get better compression when using a single list), but at least now it gives us concrete numbers to talk about.

> This would have two advantages:
> 1) Helps limit blast radius of whitelisting a name/domain

I'm unclear what you mean by this. I suggest there's an additional, unstated threat model or concern?

> 2) Provides a path for WoSign/StartCom to continue issuing
> certificates for use with trust stores and users that continue to
> trust them. They could create new issuers from the existing roots
> which wouldn't be subject to the whitelist. Users who want to trust
> them simply need to add the roots to their trust store if the roots
> are not already there.

If such a path seems desirable and worthwhile, that's certainly true. Call that a variation of G, then, perhaps G.2, as perhaps not all trust stores would consider a proverbial path to redemption at the Intermediate level, and might only consider it at the Root level.

> Requiring new issuers would also allow #3 to be easily enforced over a
> disjoint set from the whitelist and allow various pinning-like rules
> to be created in software (browsers or otherwise).

Possibly. However, unless #3 is implemented simultaneously (and as you know, it has a dependency on a CA component), it's more likely that implementing #2 and requiring a new root is functionally the same, but with significantly less complexity in code or for relying parties.

Ryan Sleevi

unread,
Sep 12, 2016, 10:05:27 PM9/12/16
to mozilla-dev-s...@lists.mozilla.org
On Monday, September 12, 2016 at 6:14:34 PM UTC-7, Richard Wang wrote:
> Please don't mix the StartCom case with the WoSign case; StartCom is 100% independent as of 2015.
>
> Even now, it is still independent in its systems, validation team and management team; we share only the CRL/OCSP distribution resources.

There is ample, growing, and unaddressed evidence suggesting otherwise.

However, for purposes of this thread and this discussion, based upon the evidence shared thus far (and presumably, more to come), and based on your current set of responses, it seems reasonable that Relying Parties be concerned about ensuring solutions address both CAs, if people conclude they are the same party (as the evidence clearly supports), since we can assume a solution that addresses both will, however imperfectly or suboptimally, also be able to address either one individually.

Peter Bowen

unread,
Sep 12, 2016, 11:01:36 PM9/12/16
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On Mon, Sep 12, 2016 at 7:02 PM, Ryan Sleevi <ry...@sleevi.com> wrote:
> On Monday, September 12, 2016 at 6:09:05 PM UTC-7, Peter Bowen wrote:
>> This would have two advantages:
>> 1) Helps limit blast radius of whitelisting a name/domain
>
> I'm unclear what you mean by this. I suggest there's an additional, unstated threat model or concern?

I'm trying to think of this as potentially reusable code. Just
because IssuerA is quasi-trusted for example.com doesn't mean IssuerB
should be. From a logic perspective, setting the whitelist per issuer
means you are basically creating name-constrained issuers.

>> 2) Provides a path for a CA to continue issuing
>> certificates for use with trust stores and users that continue to
>> trust them. They could create new issuers from the existing roots
>> which wouldn't be subject to the whitelist. Users who want to trust
>> them simply need to add the roots to their trust store if the roots
>> are not already there.
>
> If such a path seems desirable and worthwhile, that's certainly true. Call that a variation of G, then, perhaps G.2, as perhaps not all trust stores would consider a proverbial path to redemption at the Intermediate level, and might only consider it at the Root level.

I guess I wasn't clear enough. I'm thinking that the trust store
would remove RootA but add IssuerA1, IssuerA2, etc. as trust anchors.
So RootA could create IssuerNewA12 or whatever and issue from that.
As the trust store doesn't have RootA at all (neither trusted nor
distrusted), users could choose how to handle RootA going forward.
RootA's operator could also apply to have RootA added back to the
trust store at some future point.

>> Requiring new issuers would also allow #3 to be easily enforced over a
>> disjoint set from the whitelist and allow various pinning-like rules
>> to be created in software (browsers or otherwise).
>
> Possibly. However, unless #3 is implemented simultaneously (and as you know, it has a dependency on a CA component), it's more likely that implementing #2 and requiring a new root is functionally the same, but with significant less complexity in code or for relying parties.

Again, assuming issuers are added as trust anchors, a "shortest path"
algorithm would choose them over a re-added Root. The Root could have
additional restrictions applied (e.g. Expect-CT) that don't apply to
the old issuers.

Thanks,
Peter

Jakob Bohm

unread,
Sep 12, 2016, 11:30:07 PM9/12/16
to mozilla-dev-s...@lists.mozilla.org
A variation of this, would be to create (compacted) whitelists for
specific old intermediary certs, then tag the CA root as requiring
other measures (such as CT) where not overridden via whitelisting.
That way, the CA cannot bypass the measure by creating new intermediary
certs for which no trust restrictions exist.

Peter Bowen

unread,
Sep 13, 2016, 10:04:56 AM9/13/16
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On Mon, Sep 12, 2016 at 2:46 PM, Ryan Sleevi <ry...@sleevi.com> wrote:
>
> Consider if we start with the list of certificates issued by StartCom and WoSign [...] Extract the subjectAltName from every one of these certificates, and then compare against the Alexa Top 1M. This yields more than 60K certificates, at 1920K in a 'naive' whitelist.
>
> However, if you compare based on base domain (as it appears in Alexa), you end up with 18,763 unique names, with a much better compressibility. For example, when compared with Chrome's Public Suffix List DAFSA implementation (as one such compressed data structure implementation), this ends up occupying 126K of storage.
>
> 126K may be within the range of acceptable to ship within a binary. Further, there are a number of things that can be done to reduce this overhead:

I did a couple of similar tests. First, I used the PSL, excluding the
"private" portion, to get the base domains for each issuer under a
WoSign or StartCom root. Then I turned the result into a serialized
tree with minimal optimizations and compressed the result with lzma.
Using the certs currently in CT logs, I got a 1.5MB data file.

I did the same thing only including base domains which also are base
domains in the Alexa top million file, and got 97KB
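
For anyone wanting to reproduce that kind of size estimate, here is a minimal sketch of the approach (build a trie keyed on reversed domain labels, serialize it naively, and compress with lzma; a real DAFSA encoding would do considerably better, and the example input is hypothetical):

import json
import lzma

def build_trie(base_domains):
    root = {}
    for domain in base_domains:
        node = root
        for label in reversed(domain.lower().split(".")):   # com -> example
            node = node.setdefault(label, {})
        node[""] = {}    # mark the end of a whitelisted base domain
    return root

def estimate_size(base_domains):
    serialized = json.dumps(build_trie(base_domains), separators=(",", ":")).encode()
    return len(lzma.compress(serialized))

# e.g. estimate_size(["example.com", "example.org", "example.net"])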

There is a huge unknown for both of these, and that is StartCom's true
number of issued certs and domains. As far as I know, StartCom has
not logged all their 2015 certs and is probably missing some early
2016 as well. If it turns out there are a lot more StartCom certs
than currently known, then I think any decision may have to be split
between StartCom and WoSign. However, based on the known data today
that doesn't seem necessary from a pure size perspective.

Thanks,
Peter

Ryan Sleevi

unread,
Sep 13, 2016, 10:43:10 AM9/13/16
to mozilla-dev-s...@lists.mozilla.org
On Monday, September 12, 2016 at 8:01:36 PM UTC-7, Peter Bowen wrote:
> I'm trying to think of this as potentially reusable code. Just
> because IssuerA is quasi-trusted for example.com doesn't mean IssuerB
> should be. From a logic perspective, setting the whitelist per issuer
> means you are basically creating name-constrained issuers.

To the immediate point: we know that two issuers can be the same party, as is alleged with considerable evidence for StartCom & WoSign.

My point in mentioning that, again, is to demonstrate the practical implications, since that is as relevant as - and more so than - the generic solution at this point in the conversation. But I think we're in agreement that it's not mutually exclusive.

> I guess I wasn't clear enough. I'm thinking that the trust store
> would remove RootA but add IssuerA1, IssuerA2, etc as trust anchors.
> So RootA could create IssuerNewA12 or whatever and issuer from that.
> As the trust store doesn't have RootA at all (not trusted or
> distrusted), users could choose how to handle RootA going forward.
> RootA's operator could also apply to have RootA added back to the
> trust store at some future point.

In every variation that I can interpret this as, this seems to be more work for limited value; among other things, it presumes adding back Root A is sufficient to avoid the whitelist, but the complexity involved there, and the objective opinion being offered by software, make that seem a profoundly unwise and undesirable scenario.

I think if the goal is to allow re-recognition, that should be spelled out, and we should carefully think of the conditions of that and the implications of it. In my mind, it's not and shouldn't be a goal, if the overarching goal is to protect user security.

> Again, assuming issuers are added as trust anchors, a "shortest path"
> algorithm would choose them over a re-added Root. The Root could have
> additional restrictions applied (e.g. Expect-CT) that don't apply to
> the old issuers.

I'm not aware of any path building library that prefers a "shortest-path" algorithm. They prefer a "fastest path I can discover that works" algorithm.

Ryan Sleevi

unread,
Sep 13, 2016, 10:47:23 AM9/13/16
to mozilla-dev-s...@lists.mozilla.org
On Monday, September 12, 2016 at 8:30:07 PM UTC-7, Jakob Bohm wrote:
> A variation of this, would be to create (compacted) whitelists for
> specific old intermediary certs,

It sounds like you haven't been following this conversation, but the entire point of restarting this thread, and in the previous discussion, was that magic (compacted) whitelists are a bit like magic beans; yes, they can solve all our problems, but they don't exist, and so we have to decide what to do with the remaining costs.

In this case, the fundamental concern is that a whitelist of certs is too large, even compacted, and probabilistic structures are also too large and too risky when compacted to a desired size.

So we end up with alternative whitelists, such as what I proposed.

> then tag the CA root as requiring
> other measures (such as CT) where not overridden via whitelisting.
> That way, the CA cannot bypass the measure by creating new intermediary
> certs for which no trust restrictions exist.

This is literally part of what I proposed. "It could be combined with, say, requiring CT for new certs."

Ryan Sleevi

unread,
Sep 13, 2016, 10:53:56 AM9/13/16
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, September 13, 2016 at 7:04:56 AM UTC-7, Peter Bowen wrote:
> There is a huge unknown for both of these, and that is StartCom's true
> number of issued certs and domains. As far as I know, StartCom has
> not logged all their 2015 certs and is probably missing some early
> 2016 as well. If it turns out there are a lot more StartCom certs
> than currently known, then I think any decision may have to be split
> between StartCom and WoSign. However, based on the known data today
> that doesn't seem necessary from a pure size perspective.

Just to make sure I'm fully understanding your point - your argument is that it might be necessary to treat StartCom and WoSign differently, if it turned out treating them the same blows out some size budget, but you don't believe, based on the data provided, that it will, is that correct?

I agree that we can and should encourage StartCom to log all their certificates, but I don't believe it would or should materially or substantially change the results. For example, we can assume that sites trafficked enough to appear in the Alexa Top 1M are more likely to be crawled by Google or seen by Censys, right? So the odds of missing some cert are in the long tail, and not in the core data.

We also see a variety of domains using certs from either for purposes that are ostensibly not relevant to browsers - a frequent dead give-away is a cert for autodiscover.[example.com] - which is an Exchange AutoConfiguration server not used by browsers - and mail.[example.com]. I would assert we can be reasonably confident that critical services should generally not be impacted if such a cert was not included.

Peter Bowen

unread,
Sep 13, 2016, 10:56:20 AM9/13/16
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On Tue, Sep 13, 2016 at 7:53 AM, Ryan Sleevi <ry...@sleevi.com> wrote:
> We also see a variety of domains using certs from either for purposes that are ostensibly not relevant to browsers - a frequent dead give-away is a cert for autodiscover.[example.com] - which is an Exchange AutoConfiguration server not used by browsers - and mail.[example.com]. I would assert we can be reasonably confident that critical services should generally not be impacted if such a cert was not included.

I would be careful reading too much into server names.
mail.[example.com] might host web based email access. For example,
I'm typing this into a site called mail.google.com :)

Ryan Sleevi

unread,
Sep 13, 2016, 11:19:03 AM9/13/16
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, September 13, 2016 at 7:56:20 AM UTC-7, Peter Bowen wrote:
> I would be careful reading too much into server names.
> mail.[example.com] might host web based email access. For example,
> I'm typing this into a site called mail.google.com :)

Apologies that the conjunctive "and" was not clearer, and that it seemed more enumerative. My point was that some certificates demonstrate patterns - such as *both* names - that offer reasonable signals of use.

I agree that any heuristic approach leaves me profoundly uncomfortable as a policy, but I would also suggest that some patterns in the certs are signals that perhaps the impact to users, however great, may be overestimated.

Of course, all of this is based on the data we have - I agree, that if StartCom were to log its 2015/2016 certs, we'd be in a much better place to evaluate viability of minimizing user impact, if such a thing is at all possible.

Nick Lamb

unread,
Sep 13, 2016, 11:39:57 AM9/13/16
to mozilla-dev-s...@lists.mozilla.org
(Apologies for shortness and lack of context. My home is being redecorated so no non-work PCs powered on)

Ryan's example doesn't work: autodiscover is a sign of MS Exchange, but that means OWA (Outlook Web Access) may be enabled, which means web browsers see that certificate.

Jakob Bohm

unread,
Sep 13, 2016, 12:09:35 PM9/13/16
to mozilla-dev-s...@lists.mozilla.org
On 13/09/2016 16:47, Ryan Sleevi wrote:
> On Monday, September 12, 2016 at 8:30:07 PM UTC-7, Jakob Bohm wrote:
>> A variation of this, would be to create (compacted) whitelists for
>> specific old intermediary certs,
>
> It sounds like you haven't been following this conversation, but the entire point of restarting this thread, and in the previous discussion, was that magic (compacted) whitelists are a bit like magic beans; yes, they can solve all our problems, but they don't exist, and so we have to decide what to do with the remaining costs.
>
> In this case, the fundamental concern is that a whitelist of certs is too large, even compacted, and probabilistic structures are also too large and too risky when compacted to a desired size.
>
> So we end up with alternative whitelists, such as what I proposed.
>

Which is exactly the proposal I referred to as "compacted whitelists".

>> then tag the CA root as requiring
>> other measures (such as CT) where not overridden via whitelisting.
>> That way, the CA cannot bypass the measure by creating new intermediary
>> certs for which no trust restrictions exist.
>
> This is literally part of what I proposed. "It could be combined with, say, requiring CT for new certs."
>

My small suggestion was to reverse part of the logic regarding which
intermediaries are the special case and which are the "common" case
for the CA, as a simplification that might be easier to implement in
multiple browsers and other clients.

Jakob Bohm

unread,
Sep 13, 2016, 12:14:33 PM9/13/16
to mozilla-dev-s...@lists.mozilla.org
Also please be aware that all/most of the Mozilla certificate trust and
checking code is also used in Mozilla's mail client Thunderbird, to
check certificates for IMAP, POP and SMTP servers.

Ryan Sleevi

unread,
Sep 16, 2016, 9:38:25 PM9/16/16
to mozilla-dev-s...@lists.mozilla.org
For the sake of further exploring options, I've been looking at non-public sources to see what other alternatives exist.

One example set was looking at the hosts visited by GoogleBot over a 60 day period and seeing if any of the certificates seen for a host matched the certificates logged in CT.

That is, imagine the key as being constructed from [hash of cert] + [hostname from SAN] for certificates from CT, and in cases of GoogleBot crawls, [hash of cert] + [hostname from link] and [hash of cert] + [*.hostname minus a label]. That is, if GoogleBot crawled "www.google.com", it would emit keys for both "*.google.com" and "www.google.com" (to allow it to match with a cert for either name, since browsers will accept either name).
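
As a rough sketch of that key construction (the hash choice and separator are assumptions, not something stated above):

import hashlib

def crawl_keys(cert_der, hostname):
    # Emit both the exact-host key and the "first label replaced by *" key,
    # so a crawl observation can match a cert issued for either form.
    cert_hash = hashlib.sha256(cert_der).hexdigest()
    hostname = hostname.lower()
    keys = {cert_hash + "|" + hostname}
    labels = hostname.split(".")
    if len(labels) > 2:
        keys.add(cert_hash + "|*." + ".".join(labels[1:]))   # www.google.com -> *.google.com
    return keys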

While unfortunately, I'm unable to share the specific results, even in buckets, it does suggest that if one were to examine the hosts reported in these certificates, along with whether or not they actually use these certificates or are publicly accessible, and further intersect with the Alexa Top 1M, any whitelisting strategy (by host, by domain, or by certificate) could fit in under 50K, with some strategies going below 10K. The reasoning for this is that a number of hosts represented in the certificates don't use the certificate, and instead use a certificate from some other CA provider. A number have switched, for example, to Let's Encrypt, obviating the need for whitelisting.

Unfortunately, that's not easily publicly reproducible, which I think is an important aspect for consideration here.

So let's again revisit the combined set of WoSign & StartCom certs (which necessarily includes everything GoogleBot has ever seen, but not necessarily any undisclosed and undetected StartCom certs)

We know there are 5769 unique certificate hashes with wildcards in the Alexa Top 1M, over 2710 distinct eTLD+1s. There are 61,109 certs that contain non-wildcard hosts, over 18,650 distinct eTLD+1s.

Another possibility to explore, then, is to attempt to communicate with each of these hosts and see the certificate they provide, since we can't use hosts mined by Google's crawler (oh how I wish we could). If they provide one of these certificates, the eTLD+1 could be whitelisted, along with making the generous assumption that all wildcard hosts are using their certificates (I believe there's sufficient evidence this isn't the case, but sure).

This may help reduce the overall 18,763 distinct eTLD+1s into an even more compressible set, albeit at the cost of potentially excluding some certificates that were (undetectably) in use.

Gervase Markham

unread,
Sep 21, 2016, 5:22:11 AM9/21/16
to mozilla-dev-s...@lists.mozilla.org
On 12/09/16 22:46, Ryan Sleevi wrote:
> Consider if we start with the list of certificates issued by StartCom
> and WoSign, assuming the two are the same party (as all reasonable
> evidence suggests). Extract the subjectAltName from every one of
> these certificates, and then compare against the Alexa Top 1M. This
> yields more than 60K certificates, at 1920K in a 'naive' whitelist.
>
> However, if you compare based on base domain (as it appears in
> Alexa), you end up with 18,763 unique names, with a much better
> compressibility. For example, when compared with Chrome's Public
> Suffix List DAFSA implementation (as one such compressed data
> structure implementation), this ends up occupying 126K of storage.

Can you tell us how many unique base domains (PSL+1) there are across
WoSign and StartCom's entire certificate corpus, and what that might
look like as a DAFSA?

Gerv

Rob Stradling

unread,
Sep 21, 2016, 10:07:29 AM9/21/16
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 21/09/16 10:21, Gervase Markham wrote:
> On 12/09/16 22:46, Ryan Sleevi wrote:
>> Consider if we start with the list of certificates issued by StartCom
>> and WoSign, assuming the two are the same party (as all reasonable
>> evidence suggests). Extract the subjectAltName from every one of
>> these certificates, and then compare against the Alexa Top 1M. This
>> yields more than 60K certificates, at 1920K in a 'naive' whitelist.
>>
>> However, if you compare based on base domain (as it appears in
>> Alexa), you end up with 18,763 unique names, with a much better
>> compressibility. For example, when compared with Chrome's Public
>> Suffix List DAFSA implementation (as one such compressed data
>> structure implementation), this ends up occupying 126K of storage.
>
> Can you tell us how many unique base domains (PSL+1) there are across
> WoSign and StartCom's entire certificate corpus,

Hi Gerv.

I ran some queries earlier today on the crt.sh DB, to find all CNs,
dNSNames and iPAddresses in all unexpired certs whose issuer names
include either "WoSign" or "StartCom". Then I cross-referenced that
with the latest PSL data to discover the unique base domains:

WoSign:
Unique CNs/dNSNames: 395,222
Unique Base Domains: 118,785
Unique IP Addresses: 154

StartCom:
Unique CNs/dNSNames: 706,020
Unique Base Domains: 249,841
Unique IP Addresses: 0

> and what that might look like as a DAFSA?

I don't know how to answer that question, but hopefully the lists of
unique base domains that I generated will help...

https://gist.githubusercontent.com/robstradling/813138699b8527c1af58b4aa784c2d8f/raw/902883344a973103020c35a905d6c25bd4994887/wosign_base_domains.txt

https://gist.githubusercontent.com/robstradling/813138699b8527c1af58b4aa784c2d8f/raw/902883344a973103020c35a905d6c25bd4994887/startcom_base_domains.txt

Peter Bowen

unread,
Sep 21, 2016, 11:04:27 AM9/21/16
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Wed, Sep 21, 2016 at 2:21 AM, Gervase Markham <ge...@mozilla.org> wrote:
> On 12/09/16 22:46, Ryan Sleevi wrote:
>> Consider if we start with the list of certificates issued by StartCom
>> and WoSign, assuming the two are the same party (as all reasonable
>> evidence suggests). Extract the subjectAltName from every one of
>> these certificates, and then compare against the Alexa Top 1M. This
>> yields more than 60K certificates, at 1920K in a 'naive' whitelist.
>>
>> However, if you compare based on base domain (as it appears in
>> Alexa), you end up with 18,763 unique names, with a much better
>> compressibility. For example, when compared with Chrome's Public
>> Suffix List DAFSA implementation (as one such compressed data
>> structure implementation), this ends up occupying 126K of storage.
>
> Can you tell us how many unique base domains (PSL+1) there are across
> WoSign and StartCom's entire certificate corpus, and what that might
> look like as a DAFSA?

I'm not sure about a DAFSA, but I wrote a semi-naive implementation of
a compressed trie and got 1592272 bytes. That is assuming each issuer
has its own trie. It could be optimized to be smaller if it was just
a single trie of eTLD+1 for all issuers.

Thanks,
Peter

Peter Kurrasch

unread,
Sep 21, 2016, 3:05:49 PM9/21/16
to mozilla-dev-s...@lists.mozilla.org
I have a hard time seeing how any sort of whitelist solution will actually mitigate any of the bad behavior exhibited by WoSign. From my perspective, I think we can make a pretty clear case that WoSign is a poorly run CA and poses a threat to the secure Internet that many of us are trying to achieve. They have many serious bugs in their systems. Their responsiveness in fixing these problems is slow. Their understanding of security threats is limited. Their interest in compliance seems minimal. Their willingness to be forthright, honest, and open in this forum can only be described as unacceptable.

So the problem I have with a whitelist is the implication that while we don't trust the CA to issue new certs, we do have trust in the continued operation of other parts of the CA. Chief among these is revocation, as cert holders move away from WoSign to a new CA. Do we trust that WoSign will honor requests for certs to be revoked? Do we trust that revocation will take place in a timely manner? Do we trust that WoSign will not collect information on hits to any OCSP responders they have set up and share that info with...whomever?

I'm just having a hard time seeing how there is anything left to trust when it comes to WoSign. Maybe the best outcome would be a finding of irreconcilable differences and for us to go our separate ways? Maybe we just want different things in a global PKI system?


From: Peter Bowen
Sent: Wednesday, September 21, 2016 10:04 AM
To: Gervase Markham
Subject: Re: Sanctions short of distrust

...snip...

Ryan Sleevi

unread,
Sep 21, 2016, 3:27:30 PM9/21/16
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, September 21, 2016 at 12:05:49 PM UTC-7, Peter Kurrasch wrote:
> I have a hard time seeing how any sort of white list solution will actually mitigate any of the bad behavior exhibited by WoSign.

This doesn't help us understand where your disconnect is, or how we might educate and inform you about different perspectives.

> So the problem I have with a white list is the implication that while we don't trust the CA to issue new certs, we do have trust in the continued operation of other parts of the CA.

Once a certificate is issued, it's issued. What continued operations, beyond revocation (which doesn't work in the Web PKI), do you see as necessary?

> I'm just having a hard time seeing how there is anything left to trust when it comes to WoSign. Maybe the best outcome would be a finding of irreconcilable differences and for us to go our separate ways? Maybe we just want different things in a global PKI system?

It's unclear who you're referring to here. I think, judging by some of your replies, that some of the experts in this space don't agree with you or your conclusions, but this may simply be a teachable opportunity.

To try to explain to you:
A wholesale distrust is, in effect, a statement that we believe no certificate, past, present, or future, is trustworthy. This is a very strong statement, and it's very hard to make, even under significant evidence, but is sometimes necessary (for example, when an unknown number of unconstrained sub-CAs have been issued).

However, if you're willing to believe that no unconstrained sub-CAs exist, and if you're willing to accept that most, but not all, certificates were issued according to the policies and community expectations, then such a statement is overly harsh. By overly harsh, I'm not considering the reception of the CA, I'm considering the message that browser vendors would be sending to users and to sites that have chosen to use such certificates.

For example, do you believe that if a user tries to access https://www.wosign.com, they should be shown an interstitial? Do you believe that is a helpful message to end users? Do we believe that the specific certificate is untrustworthy?

In most CA cases, when evidence of malfeasance is discovered, it's not 100% of the certificates. It might be .001%. But that .001% is significant enough to make us uncomfortable trusting NEW certificates, because that margin is too high. Further, once disclosed via CT, we can reasonably be confident that EXISTING certificates conform to appropriate policies.

The only continuity of business that a CA would potentially need to provide, in the event of a distrust, is OCSP and CRLs. And we know those simply don't work in the Web PKI, which is why CRLSets, OneCRL, and certificate distrust lists exist. So I have trouble with your suggestion that a whitelist is an indication of continued trust in a CA, other than as a recognition of the fact that "most" of the certs are probably OK, but "new" certs carry too high a margin of risk to continue to be accepted.

Rob Stradling

unread,
Sep 21, 2016, 3:41:03 PM9/21/16
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 21/09/16 15:06, Rob Stradling wrote:
<snip>
> I ran some queries earlier today on the crt.sh DB, to find all CNs,
> dNSNames and iPAddresses in all unexpired certs whose issuer names
> include either "WoSign" or "StartCom". Then I cross-referenced that
> with the latest PSL data to discover the unique base domains:

Someone contacted me off-list (thanks!) to point out that my lists were
incomplete. I'd missed a load of base domains delegated below the new
gTLDs. (I hadn't spotted that my local copy of
https://data.iana.org/TLD/tlds-alpha-by-domain.txt was rather out of date).
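
A rough sketch of collapsing names to unique base domains in a way that
tracks the current suffix data, assuming the third-party tldextract Python
package (an assumption on my part; any PSL-aware library would do), would
be:

import tldextract

extract = tldextract.TLDExtract()  # fetches and caches the live suffix list

def base_domains(names):
    bases = set()
    for name in names:
        ext = extract(name.lstrip("*."))
        if ext.domain and ext.suffix:   # skip bare suffixes, IPs, etc.
            bases.add(ext.registered_domain)
    return bases

print(sorted(base_domains(["www.example.com", "*.example.co.uk", "example.com"])))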

Updated count and gists...

WoSign:
Unique Base Domains: 127,355

StartCom:
Unique Base Domains: 259,712

https://gist.githubusercontent.com/robstradling/813138699b8527c1af58b4aa784c2d8f/raw/11fc8efbb0e594a12b3c5e2e76d9a9e474e24ea9/wosign_base_domains.txt

https://gist.githubusercontent.com/robstradling/813138699b8527c1af58b4aa784c2d8f/raw/11fc8efbb0e594a12b3c5e2e76d9a9e474e24ea9/startcom_base_domains.txt

> WoSign:
> Unique CNs/dNSNames: 395,222
> Unique Base Domains: 118,785
> Unique IP Addresses: 154
>
> StartCom:
> Unique CNs/dNSNames: 706,020
> Unique Base Domains: 249,841
> Unique IP Addresses: 0
<snip>

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online
Office Tel: +44.(0)1274.730505
Office Fax: +44.(0)1274.730909
www.comodo.com

COMODO CA Limited, Registered in England No. 04058690
Registered Office:
3rd Floor, 26 Office Village, Exchange Quay,
Trafford Road, Salford, Manchester M5 3EQ


Richard Wang

unread,
Sep 21, 2016, 9:20:43 PM9/21/16
to Peter Kurrasch, mozilla-dev-s...@lists.mozilla.org
I know WoSign made some mistakes in 2015, and I accept any reasonably fair sanction. But WoSign will continue to do our best to provide the best products and best service to worldwide customers, no matter what the sanction is.
Here is the answer for your questions:

> Do we trust that WoSign will honor requests for certs to be revoked?

Yes, we honor your requests for certs to be revoked for FREE according to our CPS. We use the Akamai CDN to provide the best CRL/OCSP service to customers worldwide.

> Do we trust that revocation will take place in a timely manner?

Yes, we will process your revocation request in a timely manner that exceeds your expectations – within 24 hours (24 x 365 non-stop).

> Do we trust that WoSign will not collect information on hits to any OCSP responders they have set up and share that info with...whomever?

Yes, any CA can do this if needed. But you can use OCSP Stapling in your web server.
We don't worry that most Chinese online banking systems and many ecommerce websites use foreign CA certificates, so what do you worry about? As I said, we use the Akamai CDN service, so all hits go to Akamai Edge servers first.


Best Regards,

Richard Wang
CEO
WoSign CA limited

Peter Kurrasch

unread,
Sep 21, 2016, 10:00:47 PM9/21/16
to mozilla-dev-s...@lists.mozilla.org
Well, well. Here we are again, Ryan, with you launching into a bullying, personal attack on me instead of seeking to understand where I'm coming from and why I say the things I say. You may have noticed that I do not reply to your messages because I generally find your tone to be disrespectful of others and occasionally narrow-minded. This time, however, I will reply.

You know nothing of my knowledge, my experience, or the things that I need to be taught. You are not the arbiter of what is reasonable or sensible in this forum. I hardly think you are the one person in here to determine if people agree or disagree with me--and, for that matter, who qualifies as an expert. If Kathleen or Gerv or Richard decide that I'm disruptive and not providing any value to the wider population, I'll happily withdraw from this forum. I participate here because I enjoy it, though I obviously don't enjoy being attacked personally.

If I am a fool, let me be a fool. If I say things that don't make sense and you seek to know why, ask me questions. If you see no value in what I have to say, so be it; others in this forum might think otherwise. That's all I ask of anyone.


From: Ryan Sleevi
Sent: Wednesday, September 21, 2016 2:27 PM

...snip...

Gervase Markham

unread,
Sep 22, 2016, 5:10:04 AM9/22/16
to Peter Kurrasch
On 22/09/16 03:00, Peter Kurrasch wrote:
> Well, well. Here we are again, Ryan, with you launching into a bullying,
> personal attack on me instead of seeking to understand where I'm coming
> from and why I say the things I say.

Er, no. I am entirely comfortable with saying that if you found Ryan's
message to be a bullying, personal attack then your skin is too thin.
(Which would surprise me, given what I know of you.)

Ryan's message, while possibly carrying a slightly exasperated tone, was
a reasonable exposition of the trade-offs inherent in various options
for dis-trusting a CA, trade-offs which you seem unwilling to recognise.
I'm sad that you don't see this as a set of trade-offs, but perhaps
there's little I or Ryan can do about it.

> ‎If Kathleen or Gerv or Richard decide that I'm
> disruptive and not providing any value to the wider population, I'll
> happily withdraw from this forum.

I am not requesting that you withdraw, although you should know that the
level of account taken of what you say is approximately proportional to
the level of understanding that you show of the perspectives of all
parties involved - including those currently using WoSign certificates
for their sites.

Gerv

Jakob Bohm

unread,
Sep 22, 2016, 1:49:25 PM9/22/16
to mozilla-dev-s...@lists.mozilla.org
While you are at it:

1. How many WoSign/StartCom certificates did you find with domains not
on that IANA list?

2. How many WoSign/StartCom certificates did you find for other uses
than https://www.example.tld:

2.1 Certificates for "odd" subdomains such as "extranet.example.com"

2.2 Certificates for e-mail

2.3 Code signing certificates

2.4 Others?

Eric Mill

unread,
Sep 22, 2016, 2:55:43 PM9/22/16
to Richard Wang, Peter Kurrasch, mozilla-dev-s...@lists.mozilla.org
On Wed, Sep 21, 2016 at 6:18 PM, Richard Wang <ric...@wosign.com> wrote:

>
> > Do we trust that WoSign will not collect information on hits to any OCSP
> responders they have set up and share that info with...whomever?
>
> Yes, any CA can do this if need. But you can use OCSP Stapling in your web
> server.
> We don’t worry about most China online banking system and many ecommerce
> website using the foreign CA certificate, what do you worry about? As I
> said, we used Akamai CDN service that all hits will go to Akamai Edge
> servers first.
>

In an earlier thread, someone posted a screenshot of what appeared to be a
marketing email sent to Let's Encrypt customers, warning them about foreign
CAs.

The screenshot image was: https://pbs.twimg.com/media/CrXf7w3W8AA2zd7.jpg:large

And the text as translated by the person who posted the screenshot (which I
haven't personally verified) was:

The risks associated with foreign CA:
1. Cert revocation
If foreign CA is influenced by politics and revoke certs for important
Chinese organizations, the entire system will be paralyzed.

2. Information security risks
If the website uses foreign certs, users need to send information to
foreign servers in every visit. Time of the visit, the location of the
visit, IP addresses, and the browser, frequency of the visits are all
collected by foreign CA. This will leak commercial secrets and sensitive
data, and is a very risky!


Here, you're saying you don't consider it to be a threat, and that you
don't worry if most Chinese online banking and ecommerce websites use a
foreign CA. Was the screenshot of WoSign's marketing email accurate? And if
so, what is WoSign committing to doing w/r/t OCSP metadata that it doesn't
trust foreign CAs to do?

-- Eric


>
>
> Best Regards,
>
> Richard Wang
> CEO
> WoSign CA limited
>
>
> From: dev-security-policy [mailto:dev-security-policy-bounces+richard=
> wosig...@lists.mozilla.org] On Behalf Of Peter Kurrasch
> Sent: Thursday, September 22, 2016 3:06 AM
> To: mozilla-dev-s...@lists.mozilla.org
> Subject: Time to distrust (was: Sanctions short of distrust)
>
> Do we trust that WoSign will honor requests for certs to be revoked? Do we
> trust that revocation will take place in a timely manner? Do we trust that
> WoSign will not collect information on hits to any OCSP responders they
> have set up and share that info with...whomever?
>
> _______________________________________________
> dev-security-policy mailing list
> dev-secur...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>



--
konklone.com | @konklone <https://twitter.com/konklone>

Peter Gutmann

unread,
Sep 23, 2016, 6:52:05 AM9/23/16
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
Jakob Bohm <jb-mo...@wisemo.com> writes:

>While you are at it:
>
>1. How many WoSign/StartCom certificates did you find with domains not
> on that IANA list?
>
>2. How many WoSign/StartCom certificates did you find for other uses
> than https://www.example.tld:
>
>2.1 Certificates for "odd" subdomains such as "extranet.example.com"
>
>2.2 Certificates for e-mail
>
>2.3 Code signing certificates
>
>2.4 Others?

Note that if you ding WoSign for this you'd also need to indict half the
commercial CAs on the planet for issuing certs to non-qualified domains,
RFC 1918 addresses, duplicate names, you name it...

Peter.

Peter Kurrasch

unread,
Sep 23, 2016, 9:03:01 AM9/23/16
to mozilla-dev-s...@lists.mozilla.org
It's a fair criticism to say that I've not said anything on the implications of distrust but that does not mean I've not considered that at great length. More on that in a moment, but first let me say a few words about my style. Generally I prefer not to waste time on matters that are of little interest to others. This usually means I end up making vague comments or putting ideas out there in order to gauge interest and then go from there, as I have time available. Sometimes people are interested, sometimes not, and sometimes I get a different response entirely. To some extent I understand that and accept the consequences, but there are still times where I wish there could be fewer you-don't-know's and more have-you-considered's.

With that said, let's talk about removing trust from root certificates. For my purposes here I've adopted the standpoint of an end user--the Relying Party as it's called in CPS docs. Most CPS docs I've reviewed essentially state that it's up to the relying party to make the ultimate determination of when to trust a cert. So how does a relying party go about making that determination? What steps can the end user take to better inform himself or herself before extending trust? Here are some mechanisms I've been considering:

* Presence of the root in a trusted root store: Presumably if a root is included, it's deemed good enough by some and is a "known quantity" (this, in comparison to some self-signed cert that nobody has seen before). If I really wanted to do so, I can always check the root store myself and see that the root is, in fact, listed.

* Technical checks: If the certificate checking software successfully validates the cert chain, that is usually all the evidence I need to know I can trust the cert. I can be sure that these checks are performed as best they can be by making sure I'm using the latest version of that software. Still, the software can't check everything (e.g. certs with duplicate serial numbers) so other checks on my part might be necessary.

* Revocation: If a particular cert has been revoked for any reason, I should be able to find that out so that I will know not to use it. Ideally this is handled automatically in software but for various reasons it doesn't always work out that way. I'm not sure if the manual tools are all that robust (or exist?), but that almost doesn't matter because I'm dependent either way on the issuer of the cert to issue the proper revocation. In the case of WoSign, there are documented cases where certs were issued improperly. (I'm not sure if we have documented cases where revocations were made improperly?)

* Adherence to Policies: I'm thinking of the Mozilla policies on root inclusion, though others apply equally well. I can take some comfort in the knowledge that they exist and that Mozilla does a good job making sure the CAs continue to be in compliance. In the case of WoSign, we now know that they were in violation of the ownership policy and lied about it for almost a year. That's pretty bad so I really have to question the knowledge, understanding, and commitment that WoSign has in complying with the spirit and the letter of the other Mozilla policies. (Note that I've lost track a bit on where things are with policy compliance and I don't wish to give this any greater bearing than is warranted.)

* Adherence to Specifications: The primary specifications are, of course, the CABF BRs but I imagine there might be certain RFCs that come into play as well. As a relying party, there is little I can do but to trust and presume that the auditing and reporting process ensures that the cert issuer is compliant with the relevant standards, specifications, and best practices. Unfortunately in the case of WoSign, we now know that there are multiple cases of non-compliance *and* we know that the auditors did not identify those problems during the audit process. (I'm not sure if it's clear how much of the blame here belongs to WoSign or the auditing team?)

* Internal Controls: There are, no doubt, policies internal to WoSign (and every CA) regarding their business operation as well as the development, testing, deployment, and maintenance of their various software systems. Typically, as a relying party, the best I can do is trust that the audits have found no major gaps within the CA. Unfortunately in the case of WoSign, we now know that the auditing process was broken so now I have to wonder what gaps might actually exist that have gone undetected, unidentified, and/or unreported. (Again, I've lost track if any issues regarding WoSign systems have been identified, though I personally would include the debacle that StartCom had when they launched their alternative solution to Let's Encrypt--the name of which I've forgotten. The rush to production of an obviously flawed system clearly points to a lack of testing before its deployment.)


So here's the point I wish to make: If I'm presented with a certificate issued by WoSign and I'm told I have to decide for myself if I should trust it, I really don't see how I have any choice but to refuse to trust the cert even though my cert validation software might say it's OK. The above mechanisms available to me as an end user seem to be hopelessly compromised by WoSign's actions over the past 10 or so months.

Admittedly this is a lot to throw out at once and I know I've probably made mistakes or misstatements that don't quite square with the ongoing investigations or "findings of fact", if you will. I've probably overlooked something as well. I hope people will correct me or ask for clarifications where needed.

I also recognize that I've still said nothing about the impacts that removing trust has on the cert holders (website operators, mostly). Doing so is a deliberate move on my part just to keep this particular message of manageable size and scope. I'd be happy to discuss those impacts at another time.

Finally, I fully acknowledge that any decision to remove trust from a root certificate creates a significant disruption to the PKI ecosystem and the Internet as a whole. It is not an action to be taken capriciously but only after very careful consideration and thorough discussion. It's good that we have had and continue to have these discussions here in this forum.


________________________________________________________________________
From: Gervase Markham <ge...@mozilla.org>


Jakob Bohm

unread,
Sep 23, 2016, 9:03:44 AM9/23/16
to mozilla-dev-s...@lists.mozilla.org
That wasn't my point.

My point was that the categories I listed probably contains lots of
in-use valid and correctly issued certificates that would need to be
included in any white-listing mechanism.

Thus the size of any "trust table" or "trust name tree" etc. would need
to include space to preserve the validity of those certificates too,
especially for the cases where the relevant mechanism is used in
something other than Firefox and Chrome. For example Mozilla
Thunderbird uses the Mozilla root CA list and related NSS code to check
mail server TLS certificates and e-mail signature/encryption
certificates. Non-mozilla projects such as the Debian Linux
distribution uses the Mozilla root CA list as the main source of its
list of certificates for *all purposes*, not just TLS and e-mail.

I am aware of at least one non-Mozilla Browser which still uses NSS
code and the Mozilla root CA list for code signing certificates.
Releases of that browser itself are signed with valid StartCom OV code
signing certificates.

Rob Stradling

unread,
Sep 23, 2016, 11:19:26 AM9/23/16
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On 22/09/16 18:48, Jakob Bohm wrote:
<snip>
> While you are at it:
>
> 1. How many WoSign/StartCom certificates did you find with domains not
> on that IANA list?

Hi Jakob. I wasn't looking for this sort of thing, because Gerv was
only interested in "unique base domains (PSL+1)".

I think there were ~200 internationalized domain names amongst the certs
issued by StartCom, of which about half have internationalized TLDs. I
ignored all of these, on the assumption that the Punycode representation
of each would also be in the cert.

BTW, I also found certs containing the following public suffixes (i.e.,
PSL+0), some of which may be of interest:

WoSign:
cloudapp.net
github.io
qa2.com
kuzbass.ru

StartCom:
astrakhan.ru
chirurgiens-dentistes-en-france.fr
(and *.chirurgiens-dentistes-en-france.fr)
chita.ru
(and *.chita.ru)
duckdns.org
goip.de
gouv.ci
gov.sc
ivanovo.ru
karelia.ru
lipetsk.ru
logoip.com
logoip.de
net.tj
nsupdate.info
realm.cz
sandcats.io
tsk.ru
uem.mz

> 2. How many WoSign/StartCom certificates did you find for other uses
> than https://www.example.tld:
>
> 2.1 Certificates for "odd" subdomains such as "extranet.example.com"

How do you algorithmically determine "odd"?

> 2.2 Certificates for e-mail
>
> 2.3 Code signing certificates
>
> 2.4 Others?

I only looked for CNs, dNSNames and iPAddresses. Are these other types
of cert of particular interest for some reason?

Ryan Sleevi

unread,
Sep 23, 2016, 11:27:30 AM9/23/16
to mozilla-dev-s...@lists.mozilla.org
On Friday, September 23, 2016 at 6:03:01 AM UTC-7, Peter Kurrasch wrote:

> * Revocation: If a particular cert has been revoked for any reason, I should be able to find that out so that I will know not to use it. Ideally this is handled automatically in software but for various reasons it doesn't always work out that way. I'm not sure if the manual tools are all that robust (or exist?), but that almost doesn't matter because I'm dependent either way on the issuer of the cert to issue the proper revocation. In the case of WoSign, there are documented cases where certs were issued improperly. (I'm not sure if we have documented cases where revocations were made improperly?)

Note: In pretty much all major software, automatic revocation doesn't work under an adversarial threat model. That is, you can consider two types of misissuance: Procedural misissuance (perhaps it states Yahoo rather than Google) and adversarial misissuance (that is, someone attempting to impersonate). I'm handwaving a bit here, because it's more of a spectrum, but we might broadly lump it as "stupidity" that causes insecurity and "evil" that causes insecurity.

Stupidity is important/relevant to trust decisions, to some extent. However, in practice, I doubt you're examining each certificate presented to you on each connection *before* you send data on that connection, which is what the CPSs are worded to suggest, so it's unlikely you're making major trust decisions on the basis that the certificate says it was for "Yahoo".

So you're most likely concerned about 'evil' misissuance - those where an adversarial attacker is attempting to gain access to your private communications. And it's under this model that revocation doesn't work, due to the attacker's ability to perform various exploits (such as blocking communication to revocation servers, or stapling an OCSP GOOD response).

For this reason, I would encourage you not to think of Revocation as an important factor of trust. I agree that you want to know if a certificate is revoked in the abstract, and if we have reliable solutions that can help facilitate that (e.g. OneCRL or CRLSets are more reliable means of revocation), then great, but we're not at a place yet where revocation is an intrinsic part of the secure connection establishment.

>
> So here's the point I wish to make: If I'm presented with a certificate issued by WoSign and I'm told I have to decide for myself if I should trust it, I really don't see how I have any choice but to refuse to trust the cert even though my cert validation software might say it's OK. The above mechanisms available to me as an end user seem to be hopelessly compromised by WoSign's actions over the past 10 or so months.

You missed some other additional characteristics to consider, and ones that I think significantly affect the conclusions made.

The first is a consideration of risk. The decision to trust or distrust - and this isn't just among CAs or certificates - is more than just a binary yes or no. It's a spectrum that varies with the risk involved. For example, if you said I had a 1% chance of getting a papercut when doing some action, I'd probably be willing to entertain that risk, but if you were to say I had a 1% risk of dying, then I'd consider that a very risky thing indeed!

So the trust of a certificate, even in isolation (that is, not considering the entire corpus of certificates out there) is weighted on a spectrum. For example, if you had a domain whitelist, you would know that you only engage in risky behaviour if you access a domain on that whitelist (as your first mitigator of risk), but then you also have to consider what services the site provides or what actions you're going to engage in (as the second mitigator of risk) when evaluating that trust decision.

The other important factor that you're not considering is transparency. You're assuming that you're having to only operate on the knowledge presented in front of you with the certificate, and using only the knowledge you personally have. However, when things are disclosed - such as via Certificate Transparency - you need to factor in the other knowledge that may be had by other parties. For example, an important party you're not considering is the domain holder - if all the certificates are actually logged, or at least all the certificates you might be asked to trust - then the domain holder has an opportunity to speak out against any sort of misissuance. Similarly, you can trust that the certificates are only valid for *that* domain - if there were certificates that were able to facilitate abuse, you could trust that the community of technical experts here would have alerted to such risk.
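
As a concrete illustration of what that disclosure gives a domain holder,
here's a rough sketch of listing every logged certificate for a base domain
via the crt.sh JSON interface (assuming the third-party requests package;
the field names are simply what crt.sh returns today and may change):

import requests

def ct_certs_for(base_domain):
    resp = requests.get(
        "https://crt.sh/",
        params={"q": "%." + base_domain, "output": "json"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

for entry in ct_certs_for("example.com"):
    print(entry["not_before"], entry["issuer_name"], entry["name_value"])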

I hope this explains why your considerations of trustworthiness - all important conditions I believe - may not be taking in the full picture, and it's within that full picture, and with consideration of the risk, that there are alternatives being proposed.

Jakob Bohm

unread,
Sep 23, 2016, 12:15:48 PM9/23/16
to mozilla-dev-s...@lists.mozilla.org
On 23/09/2016 17:27, Ryan Sleevi wrote:
> On Friday, September 23, 2016 at 6:03:01 AM UTC-7, Peter Kurrasch wrote:
>
>> * Revocation: If a particular cert has been revoked for any reason, I should be able to find that out so that I will know not to use it. Ideally this is handled automatically in software but for various reasons it doesn't always work out that way. I'm not sure if the manual tools are all that robust (or exist?), but that almost doesn't matter because I'm dependent either way on the issuer of the cert to issue the proper revocation. In the case of WoSign, there are documented cases where certs were issued improperly. (I'm not sure if we have documented cases where revocations were made improperly?)
>
> Note: In pretty much all major software, automatic revocation doesn't work under an adversarial threat model. That is, you can consider two types of misissuance: Procedural misissuance (perhaps it states Yahoo rather than Google) and adversarial misissuance (that is, someone attempting to impersonate). I'm handwaving a bit here, because it's more of a spectrum, but we might broadly lump it as "stupidity" that causes insecurity and "evil" that causes insecurity.
>
> Stupidity is important/relevant to trust decisions, to some extent. However, in practice, I doubt you're examining each certificate presented to you on each connection *before* you send data on that connection, which is what the CPS's are worded to suggestion, so it's unlikely you're making major trust decisions on the basis that the certificate says it was for "Yahoo"
>
> So you're most likely concerned about 'evil' misissuance - those that an adversarial attacker is attempting to gain access to your private communications. And it's under this model that revocation doesn't work, due to the attacker's ability to perform various exploits (such as blocking communication to revocation servers, or stapling an OCSP GOOD response).
>
> For this reason, I would encourage you not to think of Revocation as an important factor of trust. I agree that you want to know if a certificate is revoked in the abstract, and if we have reliable solutions that can help facilitate that (e.g. OneCRL or CRLSets are more reliable means of revocation), then great, but we're not at a place yet where revocation is an intrinsic part of the secure connection establishment.
>

While Firefox in particular has had a *horrible* history of disabling
or not implementing basic revocation checks (case in point: at least
until recently, Firefox would *not* check CA issued CRLs automatically,
even though the URL to check is listed in the certificate being checked
and the renewal time is listed in the CRL itself), other browsers tend
to do them correctly, and they are nowhere near as bad as proponents of
extreme centralization schemes claim.

For example OCSP stapled responses cannot be reused or abused beyond
their CA specified expiry times, while CA issued CRLs and delta CRLs
cannot be used beyond their scheduled expiry times. To bypass these
mechanisms an attacker would have to somehow manipulate the relying
party's clock and/or a trusted Time Stamping Authority. Or the
attacker could choose a CA with too long expiry times on their CRLs and
OCSP responses.
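
To illustrate what enforcing those expiry times looks like on the client
side, here is a minimal sketch assuming the Python cryptography package
(signature verification against the issuer is deliberately omitted); it is
not what any browser actually ships:

import datetime
from cryptography.x509 import ocsp

def stapled_response_acceptable(der_bytes):
    resp = ocsp.load_der_ocsp_response(der_bytes)
    if resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
        return False
    now = datetime.datetime.utcnow()
    if resp.this_update > now:
        return False   # not yet valid
    if resp.next_update is not None and resp.next_update < now:
        return False   # past its CA-specified expiry; must not be reused
    return resp.certificate_status == ocsp.OCSPCertStatus.GOOD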

Mechanisms such as OneCRL tend to be horribly incomplete. Just in the
past few months there has been repeated mention on this list of revoked
certificates that were not on OneCRL, only on the CA CRLs.

Jakob Bohm

unread,
Sep 23, 2016, 12:31:14 PM9/23/16
to mozilla-dev-s...@lists.mozilla.org
On 23/09/2016 17:18, Rob Stradling wrote:
> On 22/09/16 18:48, Jakob Bohm wrote:
> <snip>
>> While you are at it:
>>
>> 1. How many WoSign/StartCom certificates did you find with domains not
>> on that IANA list?
>
> Hi Jakob. I wasn't looking for this sort of thing, because Gerv was
> only interested in "unique base domains (PSL+1)".
>

However for the relevant technique to be workable, it would have to
include "unique base domains" outside the IANA root (such as base
domains under alternative DNS roots). Algorithmically, any DNS name
found in certificates but not on the IANA suffix list should be treated
generically (e.g. assume only last component is a public suffix, or
assume any 1 to 3 letter second-level domain is also a public suffix).
Anything your script would otherwise throw away as not matching its
assumptions.

>> 2.2 Certificates for e-mail
>>
>> 2.3 Code signing certificates
>>
>> 2.4 Others?
>
> I only looked for CNs, dNSNames and iPAddresses. Are these other types
> of cert of particular interest for some reason?
>

As I said elsewhere:

2.2: Mozilla also makes an e-mail client (Thunderbird) which uses the
same CA root list and the same NSS security library to check e-mail
certificates. E-mail trust bits are still part of the Mozilla CA root
database.

2.3: Some non-Mozilla projects still use the Mozilla CA root list to
check code and document signatures, because the Mozilla CA root program
is the only major CA root program run in an open source fashion. Thus
the discussions on this mailing list would tend to inform the
maintainers of some of those projects regarding their setting of code
signing trust bits.

2.4: If the CT logs reveal any kind of certificate I did not ask about,
that would indicate that those things exist and have some relevance.

Ryan Sleevi

unread,
Sep 23, 2016, 12:41:06 PM9/23/16
to mozilla-dev-s...@lists.mozilla.org
On Friday, September 23, 2016 at 9:31:14 AM UTC-7, Jakob Bohm wrote:
> 2.2: Mozilla also makes an e-mail client (Thunderbird) which uses the
> same CA root list and the same NSS security library to check e-mail
> certificates. E-mail trust bits are still part of the Mozilla CA root
> database.

That is true, but there's no set of industry policies with respect to e-mail certificates, there's no need to log e-mail certificates to CT logs (and plenty of reason not to), there is no profile of email certificates, and there is no participation from Thunderbird maintainers.

As with below, you are raising a concern that, however accurate, has little to no practical bearing on the discussion, because of the realities of the situation.

> 2.3: Some non-Mozilla projects still use the Mozilla CA root list to
> check code and document signatures, because the Mozilla CA root program
> is the only major CA root program run in an open source fashion. Thus
> the discussions on this mailing list would tend to inform the
> maintainers of some of those projects regarding their setting of code
> signing trust bits.

As has been repeatedly mentioned, those other applications are out of scope, the application developers and maintainers do not participate in these discussions, and so while your affected parties certainly exist, there's nothing this community can or should do further with respect to this.

That is, as with any project, you can't say to upstream "Don't change this, this will break downstream", if downstream is not involved and participating in the discussions. If Downstream wants to avoid breakage, downstream should work with upstream.

Ryan Sleevi

unread,
Sep 23, 2016, 12:46:55 PM9/23/16
to mozilla-dev-s...@lists.mozilla.org
On Friday, September 23, 2016 at 9:15:48 AM UTC-7, Jakob Bohm wrote:
>they are nowhere as bad as proponents of
> extreme centralization schemes claim.

Citation needed. It would seem that you're not familiar with the somewhat well-accepted industry state of the art.

It would perhaps be useful if you could dispute, using Firefox as an example, and considering the real deployment (not the theoretical abstract of ways in which someone 'might' configure about:flags, but no one can and still have the same experience), the following points:

https://www.imperialviolet.org/2011/03/18/revocation.html
https://www.imperialviolet.org/2012/02/05/crlsets.html
https://www.imperialviolet.org/2014/04/29/revocationagain.html

>
> For example OCSP stapled responses cannot be reused or abused beyond
> their CA specified expiry times,

No, but they can be omitted, and no client hard fails on the absence of OCSP.

Similarly, fetched OCSP can be blocked, under an adversarial model.

I cannot stress enough: discussions of revocation schemes require a model of the attacker or the threat to have relevant discussions. Abstract notions, however attractive, must be intersected with practical reality.

> while CA issued CRLs and delta CRLs
> cannot be used beyond their scheduled expiry times. To bypass these
> mechanisms an attacker would have to somehow manipulate the relying
> party's clock and/or a trusted Time Stamping Authority. Or the
> attacker could choose a CA with too long expiry times on their CRLs and
> OCSP responses.

No. They just prevent them from being delivered. Which is trivial.

Gervase Markham

unread,
Sep 26, 2016, 5:08:16 AM9/26/16
to Rob Stradling
Hi Rob,

On 23/09/16 16:18, Rob Stradling wrote:
> BTW, I also found certs containing the following public suffixes (i.e.,
> PSL+0), some of which may be of interest:

Are these in the PUBLIC or PRIVATE section of the PSL? CAs are, with
appropriate caution, not constrained from issuing certificates for PSL
entries in the PRIVATE section. (E.g. Google may want to provide SSL to
all appspot apps with a *.appspot.com certificate.)

Gerv

Gervase Markham

unread,
Sep 26, 2016, 5:12:43 AM9/26/16
to Jakob Bohm
On 23/09/16 17:15, Jakob Bohm wrote:
> Mechanisms such as OneCRL tend to be horribly incomplete. Just in the
> past few months there has been repeated mention on this list of revoked
> certificates that were not on OneCRL, only on the CA CRLs.

OneCRL is not intended to be a comprehensive list of all revoked
certificates in the world. The focus is on revoked intermediates, plus
also perhaps some high-profile misissuances of end-entity certificates.

So the ".sb" certificate, for example, probably won't be added to OneCRL
because the person who has the private key came to tell us about it
rather than attempting to misuse it, and it's not at all clear how it
could be meaningfully misused anyway.

Gerv

Rob Stradling

unread,
Sep 26, 2016, 6:15:09 AM9/26/16
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 26/09/16 10:07, Gervase Markham wrote:
> Hi Rob,
>
> On 23/09/16 16:18, Rob Stradling wrote:
>> BTW, I also found certs containing the following ICANN suffixes (i.e.,
>> PSL+0), some of which may be of interest:
>
> Are these in the PUBLIC or PRIVATE section of the PSL?

(s/PUBLIC/ICANN)

A mixture...

WoSign:
PRIVATE: cloudapp.net
PRIVATE: github.io
PRIVATE: qa2.com
ICANN: kuzbass.ru

StartCom:
ICANN: astrakhan.ru
PRIVATE: chirurgiens-dentistes-en-france.fr
PRIVATE: (and *.chirurgiens-dentistes-en-france.fr)
ICANN: chita.ru
ICANN: (and *.chita.ru)
PRIVATE: duckdns.org
PRIVATE: goip.de
ICANN: gouv.ci
ICANN: gov.sc
ICANN: ivanovo.ru
ICANN: karelia.ru
ICANN: lipetsk.ru
PRIVATE: logoip.com
PRIVATE: logoip.de
ICANN: net.tj
PRIVATE: nsupdate.info
PRIVATE: realm.cz
PRIVATE: sandcats.io
ICANN: tsk.ru
ICANN: uem.mz
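
For reference, the ICANN/PRIVATE classification can be reproduced by walking
the raw PSL file and tracking its section markers. A rough sketch (it
assumes the "===BEGIN ICANN DOMAINS===" / "===BEGIN PRIVATE DOMAINS==="
comment markers in the published list):

import urllib.request

PSL_URL = "https://publicsuffix.org/list/public_suffix_list.dat"

def load_psl_sections():
    sections, current = {}, None
    with urllib.request.urlopen(PSL_URL) as fh:
        for raw in fh.read().decode("utf-8").splitlines():
            line = raw.strip()
            if line == "// ===BEGIN ICANN DOMAINS===":
                current = "ICANN"
            elif line == "// ===BEGIN PRIVATE DOMAINS===":
                current = "PRIVATE"
            elif line and not line.startswith("//"):
                sections[line.lstrip("!*.")] = current
    return sections

sections = load_psl_sections()
for suffix in ("github.io", "kuzbass.ru", "gov.sc"):
    print(suffix, sections.get(suffix, "not on the PSL"))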

> CAs are, with
> appropriate caution, not constrained from issuing certificates for PSL
> entries in the PRIVATE section. (E.g. Google may want to provide SSL to
> all appspot apps with a *.appspot.com certificate.)
>
> Gerv

Gervase Markham

unread,
Sep 26, 2016, 7:14:39 AM9/26/16
to mozilla-dev-s...@lists.mozilla.org
On 26/09/16 11:14, Rob Stradling wrote:
> ICANN: kuzbass.ru

There are several .ru in your list; we should check whether the PSL is
actually accurate. I think they opened up a lot of previously-reserved
domains a while back, but it's hard to find the right records.

These are the non-RU entries:

> ICANN: gouv.ci
> ICANN: gov.sc

Not a silly idea that governments may want certs for these, but neither
site is using such a cert.

> ICANN: net.tj

Redirects to com.tj, with a quote - domain speculator, PSL error?

> ICANN: uem.mz

A university in Mexico - PSL error?

Gerv


Rob Stradling

unread,
Sep 26, 2016, 7:26:16 AM9/26/16
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
Who determines whether or not the PSL is accurate? Does common sense
ever override the explicitly stated will of the TLD operator?

(BTW, just to be clear: I wasn't alleging, or even speculating, that the
certs containing dNSNames for these public suffices were necessarily
misissued. I only wanted to point out that they weren't on the list of
base domains (PSL+1) that I generated)

Simone Carletti

unread,
Sep 26, 2016, 7:34:17 AM9/26/16
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Mon, Sep 26, 2016 at 1:14 PM, Gervase Markham <ge...@mozilla.org> wrote:

> There are several .ru in your list; we should check whether the PSL is
> actually accurate. I think they opened up a lot of previously-reserved
> domains a while back, but it's hard to find the right records.
>

.RU entries need a cleanup. The relevant issue is
https://github.com/publicsuffix/list/issues/206 and
https://github.com/publicsuffix/list/issues/43

We didn't move forward as we were stuck in deciding the best approach to
handle them. My proposal was to remove all of them except the ones
officially provided by the registry
https://github.com/publicsuffix/list/issues/206#issuecomment-213385921

> Redirects to com.tj, with a quote - domain speculator, PSL error?


This is actually quite confusing. The .TJ registry doesn't mention the use
of suffixes
http://www.nic.tj/policy.html
http://www.nic.tj/policy4.html

However, both com.tj and net.tj are not delegated to the nic (according to
the SOA record) and they return an empty result in the whois. Different
story for the www (which delegates to an IP, and has a whois entry).

My assumption is that .com.tj and .net.tj are indeed suffixes, and someone
managed to register the www domain under those suffixes.
I can try to get in touch with the .TJ registry to have a confirmation of
the suffixes.
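
For reference, the delegation check above can be reproduced with something
like the following sketch (assuming dnspython 2.x; the behaviour will of
course change if the registry adjusts its delegations):

import dns.resolver

for name in ("com.tj", "net.tj", "www.com.tj"):
    try:
        answer = dns.resolver.resolve(name, "SOA")
        print(name, "-> SOA mname:", answer[0].mname.to_text())
    except dns.resolver.NoAnswer:
        print(name, "-> exists but has no SOA of its own")
    except dns.resolver.NXDOMAIN:
        print(name, "-> does not exist")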

> A university in Mexico - PSL error?


MZ is actually Mozambique, MX is Mexico.

-- Simone

--
Simone Carletti
Passionate programmer and dive instructor

http://simonecarletti.com/
Twitter: @weppos <https://twitter.com/weppos> - Web: simone.io

Gervase Markham

unread,
Sep 26, 2016, 10:43:46 AM9/26/16
to Rob Stradling
On 26/09/16 12:25, Rob Stradling wrote:
> Who determines whether or not the PSL is accurate? Does common sense
> ever override the explicitly stated will of the TLD operator?

Normally no, not for the explicitly-stated will (e.g. an email to us).
It might perhaps override a random policy document, and of course if a
suffix has been retired from new registrations it may not appear in
lists on the registry website but may need to be a Public Suffix
nonetheless.

> (BTW, just to be clear: I wasn't alleging, or even speculating, that the
> certs containing dNSNames for these public suffices were necessarily
> misissued. I only wanted to point out that they weren't on the list of
> base domains (PSL+1) that I generated)

No, indeed. But it's useful to examine the cases.

Gerv

Peter Kurrasch

unread,
Sep 26, 2016, 6:53:25 PM9/26/16
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
The actual revocation model I had in mind was a more friendly one where the cert holder wants it revoked for some reason. This was based on your research which showed that many WoSign clients were not using their WoSign-issued certs. (I don't remember if you indicated they had switched to a different cert issuer/provider.)

My thinking was that if we put together a white list, the idea is that those on the whitelist would migrate to a different CA over time. As that process takes place, presumably the WoSign certs would be revoked. So the question is, assuming those clients do bother to request revocation, can we trust that WoSign will honor those requests?

Certificate Transparency is something I should have included in my original list so thanks for bringing it up. I have a question though about what happens if WoSign fails to publish a cert to the CT logs: can they get away with it, and does the answer change if the user is on one side of the Great Firewall or the other?

Finally, I have no problem with the exploration of alternatives to outright distrust. That said, I did want to help flesh out how we might know when distrust is the right decision to make and what that might look like--beyond the obvious impact it has on current cert holders.


  Original Message  
From: Ryan Sleevi
Sent: Friday, September 23, 2016 10:27 AM
To: mozilla-dev-s...@lists.mozilla.org
Subject: Re: Time to distrust

...snip...

Jakob Bohm

unread,
Sep 26, 2016, 7:19:02 PM9/26/16
to mozilla-dev-s...@lists.mozilla.org
On 23/09/2016 18:46, Ryan Sleevi wrote:
> On Friday, September 23, 2016 at 9:15:48 AM UTC-7, Jakob Bohm wrote:
>> they are nowhere as bad as proponents of
>> extreme centralization schemes claim.
>
> Citation needed. It would seem that you're not familiar with the somewhat well-accepted industry state of the art.
>
> It would perhaps be useful if you could dispute, using Firefox as an example, and considering the real deployment (not the theorhetical abstract of ways in which someone 'might' configure about:flags, but no one can and still have the same experience), the following points:
>
> https://www.imperialviolet.org/2011/03/18/revocation.html

This tells me that Firefox OCSP defaults are *insecure* and reaffirms
my impression that Firefox has completely dropped the ball on CRL
handling (Since the security-on setting is for OCSP only).

It also claims (with apparent evidence) that other browsers are
similarly lenient by default, which is a surprise.

> https://www.imperialviolet.org/2012/02/05/crlsets.html

A nice example of the centralized thinking I was criticizing. For
example, your list of criteria for 3rd party CRL inclusion seems to
(as is typical of Google) impose Google-centric demands on CAs, while
arbitrarily excluding CRLs that try to revoke certificates that had
misencoded serial numbers in them.

> https://www.imperialviolet.org/2014/04/29/revocationagain.html
>

Nice rant.

>>
>> For example OCSP stapled responses cannot be reused or abused beyond
>> their CA specified expiry times,
>
> No, but they can be omitted, and no client hard fails on the absence of OCSP.
>
> Similarly, fetched OCSP can be blocked, under an adversarial model.
>
> I cannot stress enough: discussions of revocation schemes require a model of the attacker or the threat to have relevant discussions. Abstract notions, however attractive, must be intersected with practical reality.
>

My point would be that I disagree with your assessment that being
lenient to failures to reach revocation URLs is acceptable browser
practice and your associated arguments about popularity effects.

For example, the argument that CA-side URL failure would induce web site
distrust in HTTPS seems to ignore the possibility that it would just induce
similar distrust in the failed CA, thus providing a massive incentive for
CAs to make their revocation distribution systems robust.

The Symantec example of revocation for non-payment of installments is not
much of an argument, once you notice the counter-effects caused by the
reduced maximum certificate lifetime in the new CA/B rules and the
emergence of cheap certificate providers such as Let's Encrypt as an
alternative to buying certificates in installments.


>> while CA issued CRLs and delta CRLs
>> cannot be used beyond their scheduled expiry times. To bypass these
>> mechanisms an attacker would have to somehow manipulate the relying
>> party's clock and/or a trusted Time Stamping Authority. Or the
>> attacker could choose a CA with too long expiry times on their CRLs and
>> OCSP responses.
>
> No. They just prevent them from being delivered. Which is trivial.
>

My arguments presumed secure browser defaults, not the "ignore failures
and pretend all is still secure" crap coding you have both uncovered
and worsened (by suggesting to turn off the check completely in Chrome).

Kurt Roeckx

unread,
Sep 27, 2016, 3:32:36 AM9/27/16
to mozilla-dev-s...@lists.mozilla.org
On 2016-09-27 01:18, Jakob Bohm wrote:
>> It would perhaps be useful if you could dispute, using Firefox as an
>> example, and considering the real deployment (not the theorhetical
>> abstract of ways in which someone 'might' configure about:flags, but
>> no one can and still have the same experience), the following points:
>>
>> https://www.imperialviolet.org/2011/03/18/revocation.html
>
> This tells me that Firefox OCSP defaults are *insecure* and reaffirms
> my impression that Firefox has completely dropped the ball on CRL
> handling (Since the security-on setting is for OCSP only).
>
> It also claims (with apparent evidence) that other browsers are
> similarly lenient by default, which is a surprise.

You should really try and set the OCSP check to mandatory, connect on
many different networks, and then see how many times it breaks.

For instance when you connect to a captive portal and it does https it's
very likely that the OCSP check will fail.

We really want OCSP stapling.
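
A quick way to see whether a given site already staples is to shell out to
openssl s_client; a rough sketch (the exact output text can vary between
OpenSSL versions):

import subprocess

def staples_ocsp(host, port=443):
    proc = subprocess.run(
        ["openssl", "s_client", "-connect", f"{host}:{port}",
         "-servername", host, "-status"],
        input=b"", capture_output=True, timeout=30,
    )
    text = proc.stdout.decode("utf-8", "replace")
    return "OCSP Response Status: successful" in text

print(staples_ocsp("www.example.com"))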


Kurt

Jakob Bohm

unread,
Sep 27, 2016, 4:59:39 AM9/27/16
to mozilla-dev-s...@lists.mozilla.org
On 27/09/2016 09:31, Kurt Roeckx wrote:
> On 2016-09-27 01:18, Jakob Bohm wrote:
>>> It would perhaps be useful if you could dispute, using Firefox as an
>>> example, and considering the real deployment (not the theorhetical
>>> abstract of ways in which someone 'might' configure about:flags, but
>>> no one can and still have the same experience), the following points:
>>>
>>> https://www.imperialviolet.org/2011/03/18/revocation.html
>>
>> This tells me that Firefox OCSP defaults are *insecure* and reaffirms
>> my impression that Firefox has completely dropped the ball on CRL
>> handling (Since the security-on setting is for OCSP only).
>>
>> It also claims (with apparent evidence) that other browsers are
>> similarly lenient by default, which is a surprise.
>
> You should really try and set the OCSP check to mandatory, connect on
> many different networks, and then see how many times it breaks.
>

I don't have access to that many networks, the ones I mostly use are
set up to allow OCSP and CRL checks against the CA urls.

> For instance when you connect to a captive portal and it does https it's
> very likely that the OCSP check will fail.
>

If a captive portal prevents the browser from checking if the captive
portal's own certificate is revoked, then the browser should tell the
user (so the user can decide), not pretend that all is OK.

> We really want OCSP stapling.
>
>

Unfortunately, you are not going to get it anytime soon, and users have
needed protection from revocation interference since SSL was introduced by
Netscape 20 years ago, not in some future wishful scenario.

Also note that automatic CRL downloading before expiry (as specified in
the CRLs themselves) would be robust against temporary outages because
they don't become invalid until some time after the browsers should
start attempted downloads.
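
A minimal sketch of that pre-expiry scheduling idea, assuming the Python
cryptography package and a purely hypothetical CRL distribution point URL:

import datetime
import urllib.request
from cryptography import x509

def fetch_crl(url):
    with urllib.request.urlopen(url) as fh:
        return x509.load_der_x509_crl(fh.read())

def next_refresh_time(crl, margin=datetime.timedelta(hours=6)):
    # Start re-fetching well before the CRL's own nextUpdate, so that a
    # temporary outage does not leave the client without a valid CRL.
    return crl.next_update - margin

crl = fetch_crl("http://crl.example-ca.invalid/ca.crl")  # hypothetical URL
print("CRL expires:", crl.next_update)
print("Refresh no later than:", next_refresh_time(crl))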

Peter Gutmann

unread,
Sep 27, 2016, 7:50:37 AM9/27/16
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
Jakob Bohm <jb-mo...@wisemo.com> writes:

>This tells me that Firefox OCSP defaults are *insecure* and reaffirms my
>impression that Firefox has completely dropped the ball on CRL handling
>(Since the security-on setting is for OCSP only).

No, it tells me that the Firefox developers applied common sense (OK, the
people doing Firefox *crypto* applied common sense, the people doing the
Firefox UI are another story altogether). It's also not much different from
what Chrome and others are doing.

Revocation checking is one of the places where PKI theory has to confront
reality, and comes in for a rude shock. It's a cost/benefit tradeoff: CRL
checking for general sites is pretty much pointless and has a high cost, and
non-stapled OCSP is the same. For high-value certs like CAs it may be worth
it, or at least creating the impression you're doing something may give you
warm fuzzies, so it could be worth doing.

So what Firefox and Chrome and others are doing is simply acknowledging
practical reality.

Peter.

Gijs Kruitbosch

unread,
Sep 27, 2016, 8:45:29 AM9/27/16
to Peter Gutmann
(With apologies for the off-topic drift)

On 27/09/2016 12:49, Peter Gutmann wrote:
> Jakob Bohm <jb-mo...@wisemo.com> writes:
>
>> This tells me that Firefox OCSP defaults are *insecure* and reaffirms my
>> impression that Firefox has completely dropped the ball on CRL handling
>> (Since the security-on setting is for OCSP only).
>
> No, it tells me that the Firefox developers applied common sense (OK, the
> people doing Firefox *crypto* applied common sense, the people doing the
> Firefox UI are another story altogether).

(Some) People who "do" Firefox UI read this group. If you have
concrete/constructive suggestions, please file bugs or write to more
topical mailing lists - especially if you think there are things we
should do "frontend"-wise to improve the security of end users.

~ Gijs

Peter Gutmann

unread,
Sep 29, 2016, 12:49:03 AM9/29/16
to Gijs Kruitbosch, mozilla-dev-s...@lists.mozilla.org
Gijs Kruitbosch <gijskru...@gmail.com> writes:

>(Some) People who "do" Firefox UI read this group. If you have concrete/
>constructive suggestions, please file bugs or write to more topical mailing
>lists - especially if you think there are things we should do "frontend"-
>wise to improve the security of end users.

Oh, it's not the security UI, it's the look and feel of Firefox as a whole,
which has seen almost uniformly negative response from users in public
forums for several years now (Mozilla's own Firefox feedback forum was
running about 80-90% negative the last time I checked a link to it). Just
to pick one random location, go to Slashdot and find any thread on Firefox,
anything at all, and try and find anyone with a positive comment to make
about it. What I was commenting on was that what the Firefox *security*
devs were doing made perfect sense, it wasn't meant to start yet another
Firefox-post-3.x-sucks thread, they're all over the place as it is.

Peter.
