
Mozilla CT Policy


Gervase Markham

Nov 4, 2016, 8:20:11 AM
to mozilla-dev-s...@lists.mozilla.org
CT is coming to Firefox. As part of that, Mozilla needs to have a set of
CT policies surrounding how that will work. Like our root inclusion
program, we intend to run our CT log inclusion program in an open and
transparent fashion, such that the Internet community can see how it
works and how decisions are made. (It is quite possible that, like our
root program, other entities without the resources to run their own
programs might adopt our decisions.)

This policy will need to consider at least the following questions. The
point of this posting is to gather more _questions_, not to work out the
answers. In other words, I am trying to work out the scope of the
policy, not what the policy will be.

So, please add comments with additional _questions_ you think the policy
will need to address. What the answers should be is (for now) off-topic.

Questions I have so far:

* How do we decide which logs to trust?

* Do we have requirements for uptime?
* Do we have requirements for certs accepted?
* Do we have requirements for the MMD?

* How do we decide when to un-trust a log? What reasons are valid
reasons for doing so?

* Do we want to put monitoring in place to ensure our log quality or
uptime requirements are met?

* Are there any CT-related services Mozilla should consider running or
supporting, for the good of the ecosystem?

* Do we want to require a certain number of SCTs for certificates of
particular validity periods?

* Do we want the Google/non-Google diversity requirement? Or any other
diversity requirement?

* Which certs, if any, should we require CT for, and when?

* Do we want to allow some CAs to opt into CT before those dates?

* Do we want to require some CAs to do CT before those dates?

Gerv

Hanno Böck

Nov 4, 2016, 8:32:23 AM
to dev-secur...@lists.mozilla.org
Hi,

Great to see Mozilla committing to CT.

On Fri, 4 Nov 2016 12:19:32 +0000
Gervase Markham <ge...@mozilla.org> wrote:

> So, please add comments with additional _questions_ you think the
> policy will need to address. What the answers should be is (for now)
> off-topic.

Some meta-thought:
In practice pretty much every webpage wants to be compliant with all
major browsers. It's hard to imagine anyone saying "I'll comply with
Chrome's CT requirements, but not with Mozilla's" (or vice versa).

Therefore practically the "real" CT requirements will be all
requirements combined. It also probably means that diversity in CT
requirements between different browsers doesn't make a whole lot of
sense.

So one could ask: Should Mozilla just say "we agree with everything
Chrome does" ?

--
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: FE73757FA60E4E21B937579FA5880072BBB51E42

Jakob Bohm

Nov 4, 2016, 9:10:33 AM
to mozilla-dev-s...@lists.mozilla.org
* How do we allow organization-internal non-public CAs to avoid
revealing their secret membership/server lists to public CT systems,
or otherwise having to run the (administratively and technically)
expensive processes required of public CAs? For example, many medium
or large companies have in-house CAs issuing certificates for
communicating with their internal servers, VPNs, extranets etc. Such
internal CAs may vary from primitive off-line CAs (no online active
components such as OCSP responders or CT loggers) to off-the-shelf
enterprise CA packages such as Microsoft Windows Server Certificate
Services, xca or EJBCA.

* How do we prevent public CAs from misusing the exceptions for private
CAs?

* Even though not currently accepted (surprise) by the advertising
giant Google, should Mozilla set or promote standards for acceptable
CT privacy options, such as truncating names to the first level below
public suffixes, or omitting the local part of e-mail addresses other
than the RFCxxxx standard mailboxes (postmaster, webmaster,
hostmaster etc.)?

* Should Mozilla impose a multi-national diversity requirement, e.g.
that the CT services used must not all belong (directly or via
ownership etc.) to a single national jurisdiction such as the USA or
PRC? For example, if one CT log is run by Mozilla or Google (both US
organizations), should there be at least one CT log from a staunchly
independent country and organization, such as a South African-owned CT
log hosted in India?

* Should the CT logs be independent of the issuing CA (e.g.
Symantec/Thawte can run a CT log, but it only counts for certificates
from other CAs)?


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Hanno Böck

Nov 4, 2016, 10:43:36 AM
to dev-secur...@lists.mozilla.org
On Fri, 4 Nov 2016 14:09:55 +0100
Jakob Bohm <jb-mo...@wisemo.com> wrote:

> * How do we allow organization-internal non-public CAs to avoid
> revealing their secret membership/server lists to public CT systems,
> or otherwise having to run the (administratively and technically)
> expensive processes required of public CAs? For example, many medium
> or large companies have in-house CAs issuing certificates for
> communicating with their internal servers, VPNs, extranets etc. Such
> internal CAs may vary from primitive off-line CAs (no online active
> components such as OCSP responders or CT loggers) to off-the-shelf
> enterprise CA packages such as Microsoft Windows Server Certificate
> Services, xca or EJBCA.
>
> * How do we prevent public CAs from misusing the exceptions for
> private CAs?

Isn't that already solved?

Browsers already treat manually installed roots differently, e.g.
bypassing key pinning. Chrome's CT requirements don't apply to locally
installed roots.

This seems to be the obvious solution. CT is there to have transparency
of certificates by the browser-accepted CAs. If you have your own CA
that shouldn't be touched by that at all.

(By the way I always found the "secret server name" idea wrong and I
would generally recommend against local CAs in almost all cases. It
adds a lot of complexity and I assume it often creates more problems
than it solves.)

Martin Rublik

Nov 4, 2016, 10:51:31 AM
to Hanno Böck, dev-secur...@lists.mozilla.org
On Fri, Nov 4, 2016 at 3:42 PM, Hanno Böck <ha...@hboeck.de> wrote:

>
> Isn't that already solved?
>
> Browsers already treat manually installed roots differently, e.g.
> bypassing key pinning. Chrome's CT requirements don't apply to locally
> installed roots.

How about public technically constrained sub CAs?

> (By the way I always found the "secret server name" idea wrong and I
> would generally recommend against local CAs in almost all cases. It
> adds a lot of complexity and I assume it often creates more problems
> than it solves.)

Agree

Martin

Tom Ritter

Nov 4, 2016, 11:05:32 AM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 4 November 2016 at 07:19, Gervase Markham <ge...@mozilla.org> wrote:
> * How do we decide when to un-trust a log? What reasons are valid
> reasons for doing so?

Do we want different types of distrust for a log? That is, a "We don't
trust you at all anymore" distrust vs a "We don't trust signatures
issued after this date" distrust.


> * Do we want to require a certain number of SCTs for certificates of
> particular validity periods?

Do we want to treat different types of SCTs differently for this
purpose? (precert vs OCSP vs TLS Extension.)

> * Do we want to allow some CAs to opt into CT before those dates?

Do we want to allow some websites to opt into CT before those dates?

Kurt Roeckx

Nov 4, 2016, 11:19:58 AM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Fri, Nov 04, 2016 at 12:19:32PM +0000, Gervase Markham wrote:
>
> * Do we want to require a certain number of SCTs for certificates of
> particular validity periods?

What happens to the SCT requirements if a log is distrusted? Is the
date of the distrust taken into account? Is that the same for all
distrust cases?


Kurt

okaphone.e...@gmail.com

Nov 4, 2016, 12:46:29 PM
to mozilla-dev-s...@lists.mozilla.org
Well, these are logs. So:

- Is it necessary to require that log items can't be modified after they have been created? (Or is that implied by the cryptography being used?) How about deletion?

- Is it perhaps a good idea to require a certain minimum accuracy (or other characteristics, e.g. timestamps only ever increase) for a log's clock?

- Maybe you should consider what will happen if/when an important log stops being available at some point in the future. Will anything break?

- And I already mentioned it, but availability of 99% is not as good as it sounds. It means about three and a half days of downtime a year is allowed.
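On the first question: the append-only property is exactly what the cryptography provides. An RFC 6962 log commits to its entries with a Merkle tree, and the signed tree head (STH) is a hash over everything beneath it, so modifying or deleting an already-incorporated entry changes the root the log has signed. A minimal sketch of the RFC 6962 tree hash (illustrative only; real leaves are serialized MerkleTreeLeaf structures, not raw byte strings):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def mth(entries: list[bytes]) -> bytes:
    """Merkle Tree Hash per RFC 6962 section 2.1."""
    n = len(entries)
    if n == 0:
        return sha256(b"")
    if n == 1:
        return sha256(b"\x00" + entries[0])  # leaf hash: 0x00 prefix
    k = 1
    while k * 2 < n:  # k = largest power of two strictly less than n
        k *= 2
    return sha256(b"\x01" + mth(entries[:k]) + mth(entries[k:]))  # interior: 0x01 prefix

# Anyone who recorded an earlier signed root can detect tampering later:
log = [b"cert-1", b"cert-2", b"cert-3"]
root = mth(log)
log[0] = b"rewritten-entry"
assert mth(log) != root  # modification (or deletion) changes the root
```

The clock question is separate: the tree commits to the SCT timestamps, but nothing cryptographic forces them to be accurate or monotonic, so that does look like policy material.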

Jakob Bohm

Nov 4, 2016, 12:51:08 PM
to mozilla-dev-s...@lists.mozilla.org
On 04/11/2016 15:42, Hanno Böck wrote:
> On Fri, 4 Nov 2016 14:09:55 +0100
> Jakob Bohm <jb-mo...@wisemo.com> wrote:
>
>> [...]
>>
>> * How do we prevent public CAs from misusing the exceptions for
>> private CAs?
>
> Isn't that already solved?
>
> Browsers already treat manually installed roots differently, e.g.
> bypassing key pinning. Chrome's CT requirements don't apply to locally
> installed roots.
>
> This seems to be the obvious solution. CT is there to have transparency
> of certificates by the browser-accepted CAs. If you have your own CA
> that shouldn't be touched by that at all.
>

Sometimes the people designing implementations forget about this use
case and do things that are bad in that context. For example it is
routine (actually required) for public CAs to have working OCSP servers
and report revoked certificates to services such as Mozilla OneCRL.
But it is very difficult for a small-scale in-house offline CA to do
either, while it is trivial to publish a short regular CRL on an
existing internal HTTP server next to the document listing this week's
lunch menu and the employee handbook.
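As a sketch of how lightweight that can be: with the pyca/cryptography library, an offline CA can produce a signed CRL in a few lines and drop the output file on any internal web server. The file names and serial number below are hypothetical.

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509.oid import NameOID

# Load the internal CA's signing key (hypothetical path, kept offline).
with open("internal-ca-key.pem", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)

builder = (
    x509.CertificateRevocationListBuilder()
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal CA")]))
    .last_update(datetime.datetime.utcnow())
    .next_update(datetime.datetime.utcnow() + datetime.timedelta(days=7))
)

# Revoke one certificate by serial number (hypothetical value).
revoked = (
    x509.RevokedCertificateBuilder()
    .serial_number(12345)
    .revocation_date(datetime.datetime.utcnow())
    .build()
)

crl = builder.add_revoked_certificate(revoked).sign(ca_key, hashes.SHA256())

# Publish this file on the internal HTTP server next to the handbook.
with open("internal.crl", "wb") as f:
    f.write(crl.public_bytes(serialization.Encoding.DER))
```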

> (By the way I always found the "secret server name" idea wrong and I
> would generally recommend against local CAs in almost all cases. It
> adds a lot of complexity and I assume it often creates more problems
> than it solves.)
>

Think of it this way:

90%+ of all the world's computers are not public servers; they are
workstations, file servers, printers, firewalls, databases, phones etc.
Many of those may have certificate-protected interfaces such as
management web pages, encrypted mail/news services, certificate-based
IPsec etc.

Many general computer management and/or server suites contain the logic
to fully or almost fully automate the handling of certificate issuance
and use for those servers. Microsoft Windows Server is one such
"suite" that has the whole thing built in and can use it to secure
internal network traffic.

The very name/existence of a server may reveal confidential information
(besides being a target of outside network attacks). Imagine if
someone had spotted the name "development.webbrowser.google.com" in a
CT log before Chrome was publicly announced.

While keeping the existence of something secret is no substitute for
actually protecting it, it is often a reasonable first layer of
defense, leaving much less work for the actual defensive measures to
handle. It can mean the difference between getting hit by 200 script
kiddies / day and 10 script kiddies / day, making it easier to spot
serious attacks, and requiring less CPU and bandwidth resources to
deflect and log that background noise.

Han Yuwei

Nov 4, 2016, 3:43:22 PM
to mozilla-dev-s...@lists.mozilla.org
On Friday, November 4, 2016 at 8:20:11 PM UTC+8, Gervase Markham wrote:
1. What will happen if CT validation fails? Can we add a security exception for this?

2. Is an SLA required for Mozilla-chosen CT log operators?

3. If CT is required, can we request SCT-embedded certificates from CAs, since some web servers don't support the TLS extension?

Jeremy Rowley

Nov 4, 2016, 4:11:42 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
This is awesome. We're very excited to see Mozilla support CT.

How about:
1) What version of logs should Mozilla accept (do they have to comply with
RFC 6962-bis)? If they are compliant with the original spec, should they be
accepted until a certain date, by which they must transition to the new bis?

2) How long should logs operate before being trusted? Is there a period of
time for testing to ensure operational robustness?

3) How will Mozilla support the three options for providing proofs? OCSP
stapling v. embedding v. TLS extensions.
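For reference, RFC 6962 defines those three delivery routes: SCTs embedded in the certificate via the precertificate flow, carried in a stapled OCSP response, or sent in the signed_certificate_timestamp TLS extension. A sketch of inspecting the embedded variant with the pyca/cryptography API; the file name is hypothetical, and the extension class assumes a reasonably recent version of the library.

```python
from cryptography import x509

# Load a certificate that may carry embedded SCTs (hypothetical file).
with open("leaf.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    ext = cert.extensions.get_extension_for_class(
        x509.PrecertificateSignedCertificateTimestamps
    )
    for sct in ext.value:
        # Which log vouched for the (pre)certificate, and when.
        print(sct.log_id.hex(), sct.timestamp)
except x509.ExtensionNotFound:
    print("no embedded SCTs; they may arrive via OCSP stapling or the TLS extension")
```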

Florian M.

Nov 5, 2016, 4:45:38 AM
to mozilla-dev-s...@lists.mozilla.org
Hey list,

Here are some suggestions:

Should we define log algorithm/key requirements (hashing algorithms (relevant for RFC 6962-bis), asymmetric key type and length)?

Should we define a maximum threshold on log response delay to queries? (e.g. is it acceptable for a log to answer queries with a delay of tens of seconds or even minutes?)

Should we authorize log trust anchor list variations? If so, should variations have to be publicly disclosed? Should we authorize removal of trust anchors?

Should a log be authorized to reject add-chain calls when under stress? Should we limit how often this happens?

Should we restrict the protocol versions and cipher suites supported by the log HTTPS endpoints?

Cheers,
Florian
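Several of these questions (response delay, rejecting add-chain under stress, endpoint configuration) imply active probing of the RFC 6962 HTTPS API. A minimal sketch of the kind of probe a latency or uptime requirement would need, aimed at the get-sth endpoint; the log URL is hypothetical.

```python
import time
import requests

LOG_URL = "https://ct.example.net"  # hypothetical log

start = time.monotonic()
resp = requests.get(LOG_URL + "/ct/v1/get-sth", timeout=30)
elapsed = time.monotonic() - start

resp.raise_for_status()
# Per RFC 6962 section 4.3 the response carries tree_size, timestamp,
# sha256_root_hash and tree_head_signature.
sth = resp.json()
print(f"tree_size={sth['tree_size']} latency={elapsed:.3f}s")
```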

Tom Ritter

Nov 5, 2016, 1:59:04 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 4 November 2016 at 07:19, Gervase Markham <ge...@mozilla.org> wrote:
> * Are there any CT-related services Mozilla should consider running or
> supporting, for the good of the ecosystem?

Part answer, part question, but I don't want to forget it: Besides an
Auditor, perhaps Mozilla should run a DNS log query front-end to
provide diversity from Google's.

-tom

Ryan Sleevi

Nov 5, 2016, 3:26:47 PM
to mozilla-dev-s...@lists.mozilla.org
On Saturday, November 5, 2016 at 10:59:04 AM UTC-7, Tom Ritter wrote:
> > * Are there any CT-related services Mozilla should consider running or
> > supporting, for the good of the ecosystem?
>
> Part answer, part question, but I don't want to forget it: Besides an
> Auditor, perhaps Mozilla should run a DNS log query front-end to
> provide diversity from Google's.
>
> -tom

For specificity's sake: Tom's talking about having Mozilla operate a set of DNS endpoints that implement https://github.com/google/certificate-transparency/blob/master/docs/DnsServer.md

To implement that, it effectively requires running a set of CT mirrors, so that you can provide the Merkle tree inclusion proofs for arbitrary SCTs and STHs.

If not that, then it's worth Mozilla noodling how it wants to check SCTs' inclusion in a privacy-preserving fashion. It may very well be that Mozilla feels that DNS doesn't afford that privacy. However, it would be super useful to know that, and for implementors - Mozilla, Apple, others - to help collaboratively figure out solutions rather than inventing new ones ad hoc :)
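For illustration, a client-side lookup against such a DNS front-end might look like the sketch below (dnspython 2.x). The label layout loosely follows the hash/tree/sth subdomain design in that DnsServer.md document, but treat the exact names as illustrative, and ct.example.net is hypothetical.

```python
import base64
import dns.resolver

# Step 1 of an inclusion check: map a Merkle leaf hash to its index in the
# log. The query travels through the client's recursive resolver (often the
# ISP's), so the log operator never learns which end user asked; that is
# the privacy argument for DNS transport.
leaf_hash = bytes(32)  # placeholder: SHA-256 leaf hash of the entry
label = base64.b32encode(leaf_hash).decode().rstrip("=").lower()

answer = dns.resolver.resolve(f"{label}.hash.ct.example.net", "TXT")
leaf_index = int(answer[0].strings[0])
print("leaf index:", leaf_index)

# Follow-up tree.<log> queries would return the audit path for this index,
# and sth.<log> the current signed tree head, split across TXT records.
```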

Ryan Sleevi

Nov 5, 2016, 3:33:21 PM
to mozilla-dev-s...@lists.mozilla.org
On Friday, November 4, 2016 at 5:32:23 AM UTC-7, Hanno Böck wrote:
> Hi,
>
> Great to see Mozilla committing to CT.
>
> On Fri, 4 Nov 2016 12:19:32 +0000
> Gervase Markham <ge...@mozilla.org> wrote:
>
> > So, please add comments with additional _questions_ you think the
> > policy will need to address. What the answers should be is (for now)
> > off-topic.
>
> Some meta-thought:
> In practice pretty much every webpage wants to be compliant with all
> major browsers. It's hard to imagine anyone saying "I'll comply with
> Chrome's CT requirements, but not with Mozilla's" (or vice versa).
>
> Therefore practically the "real" CT requirements will be all
> requirements combined. It also probably means that diversity in CT
> requirements between different browsers doesn't make a whole lot of
> sense.
>
> So one could ask: Should Mozilla just say "we agree with everything
> Chrome does" ?

This is a very useful point to consider. As has been repeatedly stated in Chrome's discussion of CT, the requirement for one Google/one non-Google log is due to the absence of SCT->STH verification, and the absence of Gossip, in current versions of Chrome (it's being worked on). In the absence of this, if you trust an SCT from a log, you're effectively trusting that log to be honest - there's no validation occurring that the log isn't providing a split view (perhaps to users inside a particular network block/country versus those outside).

My understanding was that Mozilla's implementation status was similar to Chrome's a year ago - that is, that it doesn't implement inclusion proof fetching (in the background) and that work hadn't been scheduled/slated yet. In that case, it's a question for Mozilla about whether to trust that logs won't lie, or whether to verify.

In Google's case, the one-Google/one-non-Google was merely an interim stopgap towards that solution - because it resolves the "I don't trust other logs" aspect by trusting Google logs (which, at least for Chrome's threat model, is no different than trusting Google not to deliver hostile updates to Chrome users), while giving others the relief that they don't have to solely/entirely trust Google (by having a third-party expression).

So I think the question about 'CT Qualification' (to steal Chrome's term) is at least useful for considering that point about threat models - which, admittedly, I haven't been very good at articulating on Chrome's ct-policy@: all the ways in which we expected/planned for things to go wrong and attempted to design against them, as well as all the things we knew we didn't know solutions for, in which case we tried to leave things open.

I'll hopefully have more news in the following weeks, on Chrome's ct-policy list, but I think we're interested in hosting another "CT days" event, this time in the US, similar to the events we held in conjunction with the ETSI CA days in Europe. This would hopefully provide for an opportunity of a real time roundtable and discussion of these sorts of issues, across browsers, log operators, and relying parties, to better explore how we can develop both an immediate and long-term healthy ecosystem. Think of it as the "CT/Log Roundtable", and we'll try to figure out solutions for remote participation as well.

Gervase Markham

Nov 7, 2016, 4:59:31 AM
to Ryan Sleevi
On 05/11/16 19:33, Ryan Sleevi wrote:
> My understanding was that Mozilla's implementation status was similar
> to Chrome's a year ago - that is, that it doesn't implement inclusion
> proof fetching (in the background) and that work hadn't been
> scheduled/slated yet. In that case, it's a question for Mozilla about
> whether to trust that logs won't lie, or whether to verify.

It is correct that there is not yet a plan for when Firefox might
implement inclusion proof fetching.

One thing I have been pondering is checking the honesty of logs via
geographically distributed checks done by infra rather than clients. Did
Google consider that too easy to game?

Gerv

Ryan Sleevi

Nov 7, 2016, 11:13:12 AM
to mozilla-dev-s...@lists.mozilla.org
On Monday, November 7, 2016 at 1:59:31 AM UTC-8, Gervase Markham wrote:
> It is correct that there is not yet a plan for when Firefox might
> implement inclusion proof fetching.
>
> One thing I have been pondering is checking the honesty of logs via
> geographically distributed checks done by infra rather than clients. Did
> Google consider that too easy to game?

Yes, particularly for logs that may be compelled to be dishonest for geopolitical reasons.

Gervase Markham

Nov 7, 2016, 12:02:37 PM
to Ryan Sleevi
On 07/11/16 16:13, Ryan Sleevi wrote:
> Yes, particularly for logs that may be compelled to be dishonest for geopolitical reasons.

As in, their dishonesty would be carefully targeted and so not exposed
by this sort of coarse checking?

Gerv

Ryan Sleevi

Nov 7, 2016, 12:25:43 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, November 7, 2016 at 9:02:37 AM UTC-8, Gervase Markham wrote:
> As in, their dishonesty would be carefully targeted and so not exposed
> by this sort of coarse checking?

(Continuing with Google/Chrome hat on, since I didn't make the previous reply explicit)

Yes. An 'evil log' can provide a divided split-view, targeting only an affected number of users. Unless that SCT was observed, and reported (via Gossip or some other means of exfiltration), that split view would not be detected.

Recall: In order to ensure a log is honest, you need to ensure it's providing consistent views of the STH *and* that SCTs are being checked. In the absence of the latter, you don't need to do the former - and that's because infrastructure for monitoring primarily focuses on STH consistency, with the assumption/expectation that clients are doing the SCT inclusion proof fetching.

So if I were wanting to run an evil log, which could hide misissued certificates, I could sufficiently compel or coerce a quorum of acceptable logs to 'misissue' an SCT which they never incorporate into their STH. So long as clients don't ask for an inclusion proof of this SCT, there's no need for a split log - and no ability for infrastructure to detect it. You could use such a certificate in targeted, user-specific attacks.

This is why it's vitally important that clients fetch inclusion proofs in some manner (either through gossip or through 'privacy' intermediaries, which is effectively what the Google DNS proposal is - using your ISP's DNS hierarchy as the privacy preserving aspect), and then check that the STH is consistent (which, in the case of Chrome, Chrome clients checking Google's DNS servers is effectively an STH consistency proof with what Google sees).

In the absence of this implementation, checking the SCT provides limited guarantee that a certificate has actually been logged - in effect, you're making a full statement that you trust the log to be honest. Google's goal for Certificate Transparency has been to not trust logs to be honest, but to verify - but as Chrome builds out its implementation, it has to 'trust someone' - and given our broader analysis of the threat model and scenario, the decision to "trust Google" (by requiring at least one SCT from a Google-operated log) is seen as no worse than the existing "trust Google" requests already made of Chrome users (for example, trusting Chrome's autoupdate will not be compromised, trusting Google not to deliver targeted malicious code). [1]

Thus, in the absence of SCT inclusion proof checking (whether temporarily, as implementations blossom, or permanently, if you feel there can be no suitable privacy-preserving solution), you're trusting the logs not to misbehave, much like you trust CAs not to misbehave. You can explore technical solutions - such as inclusion proof checking - or you can explore policy solutions - such as requiring a Mozilla log, or requiring logs have some criteria to abide by ala WebTrust for CAs, or who knows what - but it's at least useful to understand the context for why that decision exists, and what the trust tradeoffs are with such a decision.


[1] As an aside, this "trust Google for binaries" bit is being explored in concepts like Binary Transparency, a very nascent and early-stage exploration of how to provide reliable assurances that binaries aren't targeted. Similarly, the work of folks on verifiable builds, such as shown by the Tor Browser Bundle, is meant to address the case of no 'obvious' backdoors, but the situation is more complex when non-open code is involved. I call this out to highlight that the computer industry has still not solved this, and even if we did for software, we have compilers and hardware to contend with, and then we're very much into "Reflections on Trusting Trust" territory.
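To make "checking SCTs to STHs" concrete: after resolving an SCT to a leaf index and fetching an audit path, the client recomputes the root and compares it with the STH. A sketch of that verification step, following the audit-path algorithm as later written down in RFC 9162 (section 2.1.3.2); this is illustrative, not Chrome's or Firefox's actual code.

```python
import hashlib

def node(left: bytes, right: bytes) -> bytes:
    """Interior node hash per RFC 6962 (0x01 prefix)."""
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf_hash: bytes, leaf_index: int, tree_size: int,
                     audit_path: list[bytes], root: bytes) -> bool:
    """Check a Merkle audit path against an STH root hash.
    leaf_hash is sha256(0x00 || leaf_input) per RFC 6962."""
    if leaf_index >= tree_size:
        return False
    fn, sn = leaf_index, tree_size - 1
    r = leaf_hash
    for p in audit_path:
        if sn == 0:
            return False
        if fn & 1 or fn == sn:
            r = node(p, r)                # sibling hash goes on the left
            if not fn & 1:
                while fn and not fn & 1:  # skip levels where our subtree
                    fn >>= 1              # was the lone rightmost node
                    sn >>= 1
        else:
            r = node(r, p)                # sibling hash goes on the right
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

If the recomputed root matches an STH that the rest of the world has also seen, the log cannot have hidden the entry without forking its tree.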

Tom Ritter

Nov 7, 2016, 12:43:27 PM
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On 7 November 2016 at 11:25, Ryan Sleevi <ry...@sleevi.com> wrote:
> On Monday, November 7, 2016 at 9:02:37 AM UTC-8, Gervase Markham wrote:
>> As in, their dishonesty would be carefully targeted and so not exposed
>> by this sort of coarse checking?
>
> (Continuing with Google/Chrome hat on, since I didn't make the previous reply explicit)
>
> Yes. An 'evil log' can provide a divided split-view, targeting only an affected number of users. Unless that SCT was observed, and reported (via Gossip or some other means of exfiltration), that split view would not be detected.
>
> [...]

++

I feel compelled to note that we have an IETF draft on Gossip to
address this need:
https://datatracker.ietf.org/doc/draft-ietf-trans-gossip/ so if people
are unaware of it, please read it and give us feedback on the IETF
[trans] list. The document needs review, but we think it's done except
for community review.

On the point of SCT Inclusion Proofs, we propose three options:
- Client fetches the Inclusion proof via DNS[0] and pollinates the
resulting STH which should be privacy preserving
- Client does _not_ fetch an inclusion proof, but provides historical
SCTs to the website of origin[1] for auditors to collect
- The client ships its browsing history to some third party wholesale.

We're not too keen on the last one ;)

But please, read the doc and give us feedback on [trans] - I don't
want to divert this thread =)

-tom

[0] Or any other privacy preserving mechanism, such as Tor, but
realistically DNS is going to be the main option for now.
[1] We assume an attacker cannot MITM a website permanently, and the
algorithm is designed such that in 'most cases' the evidence of an
attack is preserved by the client for submission after the MITM ends.

Kurt Roeckx

Nov 8, 2016, 3:54:44 AM
to mozilla-dev-s...@lists.mozilla.org
On 2016-11-07 18:25, Ryan Sleevi wrote:
> This is why it's vitally important that clients fetch inclusion proofs in some manner

Have you considered a TLS extension, where the server fetches them and
sends them to the client?


Kurt

Gervase Markham

Nov 8, 2016, 5:06:13 AM
to Ryan Sleevi
On 07/11/16 17:25, Ryan Sleevi wrote:
> Yes. An 'evil log' can provide a divided split-view, targeting only
> an affected number of users. Unless that SCT was observed, and
> reported (via Gossip or some other means of exfiltration), that split
> view would not be detected.

So it is important not just that the client which receives the
SCT checks it against an STH it can observe, but that it is reported
elsewhere for others to check? Or that a client has a method of fetching
inclusion proofs that were "observed" from elsewhere?

> So if I were wanting to run an evil log, which could hide misissued
> certificates, I could sufficiently compel or coerce a quorum of
> acceptable logs

With "quorum" effectively being the smallest number of permitted SCTs,
i.e. two.

Presumably this is one reason some people are suggesting Mozilla's
policy have a jurisdictional diversity requirement - to make such
coercion harder.

Gerv

Kurt Roeckx

Nov 8, 2016, 5:39:14 AM
to mozilla-dev-s...@lists.mozilla.org
On 2016-11-08 11:05, Gervase Markham wrote:
> On 07/11/16 17:25, Ryan Sleevi wrote:
>> Yes. An 'evil log' can provide a divided split-view, targeting only
>> an affected number of users. Unless that SCT was observed, and
>> reported (via Gossip or some other means of exfiltration), that split
>> view would not be detected.
>
> So it is important not just that the client which receives the
> SCT checks it against an STH it can observe, but that it is reported
> elsewhere for others to check? Or that a client has a method of fetching
> inclusion proofs that were "observed" from elsewhere?

From what I understand, if clients verify that the SCTs are included
in some STH, we want to be sure that other people also see those STHs,
to be able to detect a split view. If the client doesn't verify that
the SCTs are included in an STH, we want to be able to get the SCTs it
sees, to check that they end up in an STH within the merge delay.


Kurt
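"Other people also see those STHs" can itself be checked cryptographically: given two tree heads from one log, the log must be able to prove that the older tree is a prefix of the newer one, and failure to do so is evidence of a split view. A sketch of the consistency-proof check, per the algorithm later codified in RFC 9162 (section 2.1.4.2); illustrative only.

```python
import hashlib

def node(left: bytes, right: bytes) -> bytes:
    """Interior node hash per RFC 6962 (0x01 prefix)."""
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_consistency(size1: int, size2: int, root1: bytes, root2: bytes,
                       proof: list[bytes]) -> bool:
    """Verify the tree with root1 (size1 entries) is a prefix of the
    tree with root2 (size2 entries)."""
    if size1 > size2:
        return False
    if size1 == size2:
        return root1 == root2 and not proof
    if size1 & (size1 - 1) == 0:  # size1 is a power of two, so root1
        proof = [root1] + proof   # itself is the starting node
    if not proof:
        return False
    fn, sn = size1 - 1, size2 - 1
    while fn & 1:                 # descend past levels where the old
        fn >>= 1                  # tree's border node was a right child
        sn >>= 1
    fr = sr = proof[0]
    for c in proof[1:]:
        if sn == 0:
            return False
        if fn & 1 or fn == sn:
            fr = node(c, fr)
            sr = node(c, sr)
            if not fn & 1:
                while fn and not fn & 1:
                    fn >>= 1
                    sn >>= 1
        else:
            sr = node(sr, c)
        fn >>= 1
        sn >>= 1
    return fr == root1 and sr == root2 and sn == 0
```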

Ryan Sleevi

Nov 8, 2016, 11:51:24 AM
to Gervase Markham, Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On Tue, Nov 8, 2016 at 2:05 AM, Gervase Markham <ge...@mozilla.org> wrote:
> On 07/11/16 17:25, Ryan Sleevi wrote:
>> Yes. An 'evil log' can provide a divided split-view, targeting only
>> an affected number of users. Unless that SCT was observed, and
>> reported (via Gossip or some other means of exfiltration), that split
>> view would not be detected.
>
> So it is important not just that the client which receives the
> SCT checks it against an STH it can observe, but that it is reported
> elsewhere for others to check? Or that a client has a method of fetching
> inclusion proofs that were "observed" from elsewhere?

No, this isn't quite a correct understanding :)

If your goal is to detect a split view, exchanging STHs, not SCTs, is
sufficient. However, if you want to determine what was misissued, you
need the SCTs to show that - the STHs will just show you that there's
some unknown.

However, exchanging STHs by itself doesn't provide any security
guarantees - if you're not checking SCTs to STHs, then a log operator
never has reason to lie about the STH, and can simply omit
certificates without splitting STHs. However, if a client checks SCTs
to STHs, they can't be sure they're not getting a split view, without
also checking others' STHs.

In Chrome's case, it receives a list daily from Google of the STHs
that Google has observed, and then compares its SCTs against those
STHs from the log. As such, the log cannot hide a split view - even if
it lies about the STH to the client, it will still have to prove the
STH it gave to the client against the STH Google saw. Even here,
though, it is important that the client can send some signal
indicating it's receiving a split view. This is where Gossip comes in.

> Presumably this is one reason some people are suggesting Mozilla's
> policy have a jurisdictional diversity requirement - to make such
> coercion harder.

Possibly, but I encourage you to review the past CA/Browser Forum
discussions about CT, and the ct-policy list, to understand why Google
intentionally removed its "diversity" requirement as being ambiguous
and unenforceable, and contributing more harm than good.

For any system of diversity to be relevant, you must be able to
quantify it, and you must be able to quantify it over time. As the
situation with StartCom/WoSign/Qihoo showed, both Mozilla and the
broader ecosystem are not well suited to continuously monitor the
complex legal systems of ownership, let alone nexuses of business
operations. And if you can't be certain, and can't measure it, then
are you actually providing value?

Ryan Sleevi

Nov 8, 2016, 11:53:22 AM
to Kurt Roeckx, mozilla-dev-s...@lists.mozilla.org
On Tue, Nov 8, 2016 at 12:53 AM, Kurt Roeckx <ku...@roeckx.be> wrote:
> On 2016-11-07 18:25, Ryan Sleevi wrote:
>>
>> This is why it's vitally important that clients fetch inclusion proofs in
>> some manner
>
>
> Have you considered a TLS extension, where the server fetches them and
> sends them to the client?

Yes, but the client still has to fetch proofs (this would be from
STH-server to STH-client or from STH-server-A to STH-server-B) and
much of the data would be duplicative (because it's a Merkle tree). It
would also have to be continually updated by the servers.

And of course, the simplest reason of all, which is that if it relies
on server change, it won't happen.

Jakob Bohm

Nov 8, 2016, 2:25:08 PM
to mozilla-dev-s...@lists.mozilla.org
On 08/11/2016 17:50, Ryan Sleevi wrote:
> On Tue, Nov 8, 2016 at 2:05 AM, Gervase Markham <ge...@mozilla.org> wrote:
>>
>> ...
>...
>
>> Presumably this is one reason some people are suggesting Mozilla's
>> policy have a jurisdictional diversity requirement - to make such
>> coercion harder.
>
> Possibly, but I encourage you to review the past CA/Browser Forum
> discussions about CT, and the ct-policy list, to understand why Google
> intentionally removed its "diversity" requirement as being ambiguous
> and unenforceable, and contributing more harm than good.
>
> For any system of diversity to be relevant, you must be able to
> quantify it, and you must be able to quantify it over time. As the
> situation with StartCom/WoSign/Qihoo showed, both Mozilla and the
> broader ecosystem are not well suited to continuously monitor the
> complex legal systems of ownership, let alone nexuses of business
> operations. And if you can't be certain, and can't measure it, then
> are you actually providing value?
>

Diversity requirements are about reducing the likelihood of
simultaneous coercion, as it can never be ruled out that some powerful
organization already engaged in such things could use some of its
backhanded tactics to subvert a log operator that is entirely outside
its direct jurisdiction.

History has taught us that such things do happen from time to time.

Ryan Sleevi

Nov 8, 2016, 2:51:47 PM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Tue, Nov 8, 2016 at 11:24 AM, Jakob Bohm <jb-mo...@wisemo.com> wrote:
> Diversity requirements are about reducing the likelihood of
> simultaneous coercion, as it can never be ruled out that some powerful
> organization already engaged in such things could use some of its
> backhanded tactics to subvert a log operator that is entirely outside
> its direct jurisdiction.
>
> History has taught us that such things do happen from time to time.

Having written the original diversity requirement for Chrome, I'm
quite familiar with what they are intended to be used for - I'm just
saying that, as a practical matter, if you work through an actual
threat model, you'll find that they fail rather considerably in
everything other than 'feel good'.

As it specifically relates to CT, it might help if you fully
articulate what you anticipate the threat is - right now, it is
presumably legal coercion - and then think about how, using the
existing logs, you would quantify that risk.

The counter-argument against diversity requirements is that, rather
than relying on policy elements built around data points that are
unquantifiable or, even for organizations of Mozilla's and Google's
size, unrealistic to track, you can instead rely on technical measures
that provide the same assurances, such as checking inclusion proofs
against STHs, checking consistency of STHs, and gossiping views. After
exhausting those technical solutions, question again whether policy is
correct.

Similarly, once your threat model is actually articulated, evaluate
the risk of an arbitrary (as it necessarily is) diversity requirement
and its harm on the ecosystem or cost to Mozilla to maintain the
appearances of it, against the practical and perceived risks of being
more liberal.

Trust is a spectrum, and the calculus can be quite difficult, but
again - as the example of WoSign/StartCom/Qihoo showed - it can be
incredibly expensive and unrealistic to think it will be enforced
solely through goodwill. It took nearly 8 months for Mozilla to obtain
sufficient evidence of the relationship between those organizations.
And if you think that's unacceptably long, then perhaps the policy
isn't the right answer, because that's a bit of an optimistic look at
how things go, with multiple people dedicating significant amounts of
time to understand the issues.

Jakob Bohm

Nov 8, 2016, 3:08:29 PM
to mozilla-dev-s...@lists.mozilla.org
I was responding to your simplistic argument that the existence of
ownership change detection failures made diversity requirements
worthless. I was not calculating their actual worth compared to other
measures.

Ryan Sleevi

Nov 8, 2016, 3:14:54 PM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Tue, Nov 8, 2016 at 12:07 PM, Jakob Bohm <jb-mo...@wisemo.com> wrote:
> I was responding to your simplistic argument that the existence of
> ownership change detection failures made diversity requirements
> worthless. I was not calculating their actual worth compared to other
> measures.

Then that's an argument simply for appearances' sake, without providing
any actual value - and the simplistic argument is all that's necessary
to show that it's an unreasonable burden without tangible value.