
DarkMatter Concerns


Wayne Thayer

Feb 22, 2019, 4:21:24 PM
to mozilla-dev-security-policy
The recent Reuters report on DarkMatter [1] has prompted numerous questions
about their root inclusion request [2]. The questions that are being raised
are equally applicable to their current status as a subordinate CA under
QuoVadis (recently acquired by DigiCert [3]), so it seems appropriate to
open up a discussion now. The purpose of this discussion is to determine if
Mozilla should distrust DarkMatter by adding their intermediate CA
certificates that were signed by QuoVadis to OneCRL, and in turn deny the
pending root inclusion request.

The rationale for distrust is that multiple sources [1][4][5] have provided
credible evidence that spying activities, including use of sophisticated
targeted surveillance tools, are a key component of DarkMatter’s business,
and such an organization cannot and should not be trusted by Mozilla. In
the past Mozilla has taken action against CAs found to have issued MitM
certificates [6][7]. We are not aware of direct evidence of misused
certificates in this case. However, the evidence does strongly suggest that
misuse is likely to occur, if it has not already.

Mozilla’s Root Store Policy [8] grants us the discretion to take actions
based on the risk to people who use our products. Despite the lack of
direct evidence of misissuance by DarkMatter, this may be a time when we
should use our discretion to act in the interest of individuals who rely on
our root store.

I would greatly appreciate everyone's constructive input on this issue.

- Wayne

[1] https://www.reuters.com/investigates/special-report/usa-spying-raven/

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262

[3]
https://groups.google.com/d/msg/mozilla.dev.security.policy/hicp7AW8sLA/KUSn20MrDgAJ

[4]
https://www.evilsocket.net/2016/07/27/How-The-United-Arab-Emirates-Intelligence-Tried-to-Hire-me-to-Spy-on-its-People/

[5]
https://theintercept.com/2016/10/24/darkmatter-united-arab-emirates-spies-for-hire/

[6]
https://groups.google.com/d/msg/mozilla.dev.security.policy/czwlDNbwHXM/Fj-LUvhVQYEJ

[7] https://bugzilla.mozilla.org/show_bug.cgi?id=1232689
[8]
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/

Corey Bonnell

Feb 22, 2019, 4:28:30 PM
to mozilla-dev-s...@lists.mozilla.org
Hello,
Section 7.1 of the Baseline Requirements states that, "Effective September 30, 2016, CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) containing at least 64 bits of output from a CSPRNG".

An analysis of the 23 known certificates (4 root CA, 6 ICA, 1 Audit CA, and 12 end-entity interoperability/test certificates) in the DarkMatter certificate hierarchy currently listed in the inclusion request indicates that the hierarchy is likely not compliant with this requirement. Specifically, all 23 certificates have an 8-octet (64-bit) serial number, but the most significant bit (big-endian) of each serial number is 0. The probability of all 23 known certificates in the hierarchy having exactly 64 bits of output from a CSPRNG with a most significant bit value of 0 is 1 in 2^23, or 1 in 8,388,608, which would make this a highly extraordinary event. The far more likely explanation is that each certificate's serial number does not contain the requisite 64 bits of output from a CSPRNG.

A detailed breakdown is as follows:

"subject CN", notBefore, "serial number", "highest bit index set (1-based indexing)"

"UAE Global Root CA G3", 2017-05-17, 47:0E:FF:0B:2E:B3:83:40, 63
"UAE Global Root CA G4", 2017-05-17, 1E:86:4A:1C:01:B1:46:3F, 61
"DarkMatter Audit CA", 2017-05-18, 7B:02:F9:F1:42:64:C1:42, 63
"DarkMatter Root CA G3", 2017-05-18, 6A:E6:CC:D1:A8:29:7F:EB, 63
"DarkMatter Root CA G4", 2017-05-18, 61:17:4D:F7:2B:EC:5F:84, 63
"DigitalX1 High Assurance CA G3", 2018-06-28, 75:50:D6:6F:78:B4:BD:F5, 63
"DigitalX1 High Assurance CA G4", 2018-06-28, 6E:E0:2C:70:C9:43:17:16, 63
"DM X1 High Assurance CA G3", 2018-06-28, 7D:DE:FE:2D:9F:05:74:DE, 63
"DM X1 High Assurance CA G4", 2018-06-28, 40:17:D7:B9:DD:ED:20:55, 63
exped.ca.darkmatter.ae, 2018-07-05, 12:5E:AF:E7:93:49:9A:95, 61
expeu.ca.darkmatter.ae, 2018-07-05, 2B:65:3C:DB:5C:7D:F6:56, 62
exprd.ca.darkmatter.ae, 2018-07-05, 55:E4:A1:AE:7C:A3:64:14, 63
expru.ca.darkmatter.ae, 2018-07-05, 74:EE:AC:88:24:3D:F9:E4, 63
reved.ca.darkmatter.ae, 2018-07-05, 1A:E6:DA:22:59:06:AD:C1, 61
reveu.ca.darkmatter.ae, 2018-07-05, 0D:B3:3E:12:5A:4A:11:DF, 60
revrd.ca.darkmatter.ae, 2018-07-05, 0D:C6:08:0C:4F:B4:76:63, 60
revru.ca.darkmatter.ae, 2018-07-05, 6F:5A:8C:4F:54:FC:3A:E2, 63
valed.ca.darkmatter.ae, 2018-07-05, 1E:40:E3:2D:DF:21:95:59, 61
valeu.ca.darkmatter.ae, 2018-07-05, 65:84:D9:1D:F5:CE:C5:89, 63
valrd.ca.darkmatter.ae, 2018-07-05, 3F:53:0E:C5:7D:F5:83:C5, 62
valru.ca.darkmatter.ae, 2018-07-05, 14:CB:73:81:18:20:C5:25, 61
"DigitalX1 Assured CA G4", 2018-09-05, 6D:9F:01:8E:40:8E:5F:7F, 63
"DM X1 Assured CA G4", 2018-09-05, 71:9C:24:E8:9A:D9:44:AB, 63

Thanks,
Corey

Jonathan Rudenberg

Feb 22, 2019, 5:37:20 PM
to dev-secur...@lists.mozilla.org
On Fri, Feb 22, 2019, at 16:21, Wayne Thayer via dev-security-policy wrote:
> Despite the lack of
> direct evidence of misissuance by DarkMatter, this may be a time when we
> should use our discretion to act in the interest of individuals who rely on
> our root store.

It's worth noting that DarkMatter has already been documented to have misissued certificates, though not in a way that is obviously for malicious purposes.

1) As discovered by Rob Stradling[1], they issued at least two certificates with a CN that was not included in the SAN extension. An incident report was requested[2], but I was unable to find it in Bugzilla or on this mailing list.

2) https://crt.sh/?id=271084003&opt=zlint - This certificate has an invalid domain `apiuat.o`. I'm not aware of prior discussion about this.

With regards to the broader question, I believe that DarkMatter's alleged involvement with hacking campaigns is incompatible with operating a trustworthy CA. This combined with the existing record of apparent incompetence by DarkMatter (compare the inclusion bugs for other recently approved CAs for contrast), makes me believe that the approval request should be denied and the existing intermediates revoked via OneCRL. I don't see how approving them, or the continued trust in their intermediates, would be in the interests of Mozilla's users or compatible with the Mozilla Manifesto.

Jonathan

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c29
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c32

coo...@gmail.com

Feb 22, 2019, 6:51:52 PM
to mozilla-dev-s...@lists.mozilla.org
On Friday, February 22, 2019 at 2:37:20 PM UTC-8, Jonathan Rudenberg wrote:
> With regards to the broader question, I believe that DarkMatter's alleged involvement with hacking campaigns is incompatible with operating a trustworthy CA. This combined with the existing record of apparent incompetence by DarkMatter (compare the inclusion bugs for other recently approved CAs for contrast), makes me believe that the approval request should be denied and the existing intermediates revoked via OneCRL. I don't see how approving them, or the continued trust in their intermediates, would be in the interests of Mozilla's users or compatible with the Mozilla Manifesto.
>
> Jonathan
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c29
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c32

I wrote a post about this issue this morning for EFF: https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups-shouldnt-be-trusted-your-browser-or-anywhere-else

Given DarkMatter's business interest in intercepting TLS communications adding them to the trusted root list seems like a very bad idea. (I would go so far as revoking their intermediate certificate as well, based on these revelations.)

rjarr...@yahoo.com

Feb 22, 2019, 7:35:01 PM
to mozilla-dev-s...@lists.mozilla.org
I can't trust the DarkMatter CA for a minute. It's a threat to national security.

Kurt Roeckx

Feb 23, 2019, 4:16:37 AM
to dev-secur...@lists.mozilla.org
On Fri, Feb 22, 2019 at 03:45:39PM -0800, cooperq--- via dev-security-policy wrote:
> On Friday, February 22, 2019 at 2:37:20 PM UTC-8, Jonathan Rudenberg wrote:
> > With regards to the broader question, I believe that DarkMatter's alleged involvement with hacking campaigns is incompatible with operating a trustworthy CA. This combined with the existing record of apparent incompetence by DarkMatter (compare the inclusion bugs for other recently approved CAs for contrast), makes me believe that the approval request should be denied and the existing intermediates revoked via OneCRL. I don't see how approving them, or the continued trust in their intermediates, would be in the interests of Mozilla's users or compatible with the Mozilla Manifesto.
> >
> > Jonathan
> >
> > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c29
> > [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c32
>
> I wrote a post about this issue this morning for EFF: https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups-shouldnt-be-trusted-your-browser-or-anywhere-else
>
> Given DarkMatter's business interest in intercepting TLS communications adding them to the trusted root list seems like a very bad idea. (I would go so far as revoking their intermediate certificate as well, based on these revelations.)

I would also like to have a comment from the current root owner
(digicert?) on what they plan to do with it.


Kurt

Scott Rea

Feb 23, 2019, 5:08:00 AM
to Wayne Thayer, mozilla-dev-security-policy
G’day Wayne et al,

In response to your post overnight (included below), I want to assure you that DarkMatter’s work is solely focused on defensive cyber security, secure communications and digital transformation. We have never, nor will we ever, operate or manage non-defensive cyber activities against any nationality.

Furthermore, in the spirit of transparency, we have published all our public trust TLS certificates to appropriate CT log facilities (including all our OV certificates) before this was even a requirement. We have been entirely transparent in our operations and with our clients, as we consider this a vital component of establishing and maintaining trust.

We have used FIPS certified HSMs as our source of randomness in creating our Authority certificates, so we have opened an investigation based on Corey Bonnell’s earlier post regarding serial numbers and will produce a corresponding bug report on the findings.

I trust this answers your concerns and we can continue the Root inclusion onboarding process.


Regards,


--

Scott Rea

On 2/23/19, 1:21 AM, "dev-security-policy on behalf of Wayne Thayer via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

The recent Reuters report on DarkMatter [1] has prompted numerous questions
about their root inclusion request [2]. The questions that are being raised
are equally applicable to their current status as a subordinate CA under
QuoVadis (recently acquired by DigiCert [3]), so it seems appropriate to
open up a discussion now. The purpose of this discussion is to determine if
Mozilla should distrust DarkMatter by adding their intermediate CA
certificates that were signed by QuoVadis to OneCRL, and in turn deny the
pending root inclusion request.

The rationale for distrust is that multiple sources [1][4][5] have provided
credible evidence that spying activities, including use of sophisticated
targeted surveillance tools, are a key component of DarkMatter’s business,
and such an organization cannot and should not be trusted by Mozilla. In
the past Mozilla has taken action against CAs found to have issued MitM
certificates [6][7]. We are not aware of direct evidence of misused
certificates in this case. However, the evidence does strongly suggest that
misuse is likely to occur, if it has not already.

Mozilla’s Root Store Policy [8] grants us the discretion to take actions
based on the risk to people who use our products. Despite the lack of
direct evidence of misissuance by DarkMatter, this may be a time when we
should use our discretion to act in the interest of individuals who rely on
our root store.

Scott Rea | Senior Vice President - Trust Services
Tel: +971 2 417 1417 | Mob: +971 52 847 5093
Scot...@darkmatter.ae

The information transmitted, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and destroy any copies of this information.


ferenn...@gmail.com

Feb 23, 2019, 9:30:23 AM
to mozilla-dev-s...@lists.mozilla.org
On Friday, February 22, 2019 at 10:21:24 PM UTC+1, Wayne Thayer wrote:
> We are not aware of direct evidence of misused
> certificates in this case. However, the evidence does strongly suggest that
> misuse is likely to occur, if it has not already.

So, basing the trust of a CA on "suggestion" and crystal-ball-like "looking into the future" (asserting they _will_ abuse their power) without a shred of conclusive evidence is considered good practice now? Aren't the rules for admission of a CA in root stores there for a reason (among others, to keep the process objective)?
Not like all the other ones in the root stores have spotless historical records either. Far from it.

> I don't see how approving them, or the continued trust in their intermediates, would be in the interests of Mozilla's users or compatible with the Mozilla Manifesto.

Oh come on. Mozilla itself isn't compatible with the Mozilla Manifesto.

Also, I don't see how a corporate organization's manifesto should have any bearing on the truststore used in many independent FOSS operating systems and applications. Mozilla might not agree with many things based on political bias and let's leave that out the door, shall we? Or do you want to start refusing or distrusting CAs that have any sort of affiliation with right-wing political parties next?

Kurt Roeckx

Feb 23, 2019, 9:46:14 AM
to Scott Rea, Wayne Thayer, mozilla-dev-security-policy
On Sat, Feb 23, 2019 at 02:07:38PM +0400, Scott Rea via dev-security-policy wrote:
> G’day Wayne et al,
>
> In response to your post overnight (included below), I want to assure you that DarkMatter’s work is solely focused on defensive cyber security, secure communications and digital transformation. We have never, nor will we ever, operate or manage non-defensive cyber activities against any nationality.

Can you explain what you mean with defensive cyber security and
how this relates to the CA?


Kurt

alex....@gmail.com

Feb 23, 2019, 9:51:21 AM
to mozilla-dev-s...@lists.mozilla.org
(Writing in my personal capacity)

One of the things that I think is important is to tease out factual predicates that could be grounds for exclusion. It's clear to me that there is a tremendous level of unease with DarkMatter, largely (though not exclusively!) as a result of the Reuters article. We should be able to articulate which conduct described there is incompatible with participation in the Mozilla Root Program.

I propose two answers (to start with :-)):

First is honesty. Even as we build technologies such as CT and audit regimes which improve auditability and accountability, CAs are ultimately in the business of trust. https://twitter.com/josephfcox/status/1090592247379361792 makes the argument that DarkMatter has been in the business of lying to journalists. Lying is fundamentally incompatible with trust.

Second is vulnerability exploitation. The Reuters article describes use of the "Karma" malware/exploits against iOS. It's difficult for me to imagine anyone in the business of using iOS 0days who doesn't also have a few Firefox exploits up their sleeve (possibly in the form of Tor Browser 0days). This is a leap, but not a big one. Using Firefox 0days in the wild (particularly against the sort of targets alleged: human rights activists, journalists, etc.) is not compatible with Mozilla's mission, or our parochial interest in the security of our own software.

Alex

Todd Troxell

Feb 23, 2019, 3:38:51 PM
to mozilla-dev-s...@lists.mozilla.org
IDK this seems like an obvious one to me. Let them find another way. We don't have to make it easy.

-Todd

Scott Rea

Feb 24, 2019, 12:34:59 AM
to Kurt Roeckx, Wayne Thayer, mozilla-dev-security-policy
G’day Kurt,

DarkMatter has several business units that focus on a broad range of cyber security activities. The Trust Services BU is responsible for the DarkMatter CA and is primarily focused on enabling secure communications and digital transformation. We utilize the services of other DM BUs that are primarily focused on defensive cyber security activities, e.g. Cyber Network Defense and Managed Security Services, to protect and ensure the integrity of the CA operations.

Regards,


--

Scott Rea



Scott Rea | Senior Vice President - Trust Services
Tel: +971 2 417 1417 | Mob: +971 52 847 5093
Scot...@darkmatter.ae

The information transmitted, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and destroy any copies of this information.

Nex

Feb 24, 2019, 5:09:09 AM
to dev-secur...@lists.mozilla.org
On 2/23/19 11:07 AM, Scott Rea via dev-security-policy wrote:
> G’day Wayne et al,
>
> In response to your post overnight (included below), I want to assure you that DarkMatter’s work is solely focused on defensive cyber security, secure communications and digital transformation. We have never, nor will we ever, operate or manage non-defensive cyber activities against any nationality.
>
> Furthermore, in the spirit of transparency, we have published all our public trust TLS certificates to appropriate CT log facilities (including even all our OV certificates) before this was even a requirement. We have been entirely transparent in our operations and with our clients as we consider this a vital component of establishing and maintaining trust.
>
> We have used FIPS certified HSMs as our source of randomness in creating our Authority certificates, so we have opened an investigation based on Corey Bonnell’s earlier post regarding serial numbers and will produce a corresponding bug report on the findings.
>
> I trust this answers your concerns and we can continue the Root inclusion onboarding process.

For clarity, are you rejecting all of the following articles and blog
posts as false and fabricated?

1. https://www.reuters.com/investigates/special-report/usa-spying-raven/
2.
https://theintercept.com/2016/10/24/darkmatter-united-arab-emirates-spies-for-hire/
3.
https://www.evilsocket.net/2016/07/27/How-The-United-Arab-Emirates-Intelligence-Tried-to-Hire-me-to-Spy-on-its-People/

I don't mean to be cynical, but a personal assurance, set against the mounting evidence and sources spanning several years, isn't a very convincing argument.

Best,
C.

cwbu...@gmail.com

Feb 24, 2019, 1:00:55 PM
to mozilla-dev-s...@lists.mozilla.org
This seems like an absolute no-brainer to me. DarkMatter's past behavior and line of business are fundamentally incompatible with the level of trust reposed in CAs. This is not even a close call. I believe Mozilla should:
1. Deny the root inclusion request;
2. Add the intermediate CA certificates that were signed by QuoVadis to OneCRL; and
3. Demand an explanation from DigiCert as to why the intermediate CA certificates were issued in the first place and why they remain unrevoked.

eve....@gmail.com

Feb 24, 2019, 1:01:10 PM
to mozilla-dev-s...@lists.mozilla.org
This certificate has already infested all major browsers. Removing it breaks a lot of pages and gives you an invalid certificate error.

Tor, Chrome, Firefox... all infested.

named...@gmail.com

Feb 24, 2019, 1:01:27 PM
to mozilla-dev-s...@lists.mozilla.org
Op zaterdag 23 februari 2019 20:38:51 UTC schreef Todd Troxell:
> IDK this seems like an obvious one to me. Let them find another way. We don't have to make it easy.
>
> -Todd

It should also be noted that DarkMatter has very strong ties to the UAE government and operates the UAE national PKI (https://ca.darkmatter.ae/UAE/index.html).

Corey Bonnell

Feb 24, 2019, 3:50:42 PM
to mozilla-dev-s...@lists.mozilla.org
I would like to bolster my previous assertion that the serial number generation scheme used in the DarkMatter certificate hierarchy likely does not meet the requirements set forth in the Baseline Requirements, section 7.1.

A further analysis of all DarkMatter-issued certificates which were logged to Certificate Transparency logs with a notBefore date of 2016-09-30 or later was performed. This certificate corpus of 235 unique certificates (pre-certificate and final certificate pairs are considered to be a single “unique certificate” to avoid double-counting) is overwhelmingly comprised of end-entity TLS certificates, but there are also several infrastructure-related certificates (such as OCSP Response Signing certificates, etc.) included. DarkMatter has asserted that all publicly trusted TLS certificates that they have issued are logged to Certificate Transparency logs, so this set of 235 unique certificates includes the entirety of publicly trusted TLS certificates issued by DarkMatter since 2016-09-30.

This analysis has revealed that all 235 unique certificates have a serial number of 8 octets (64 bits) and a big-endian most significant bit set to 0. Given that the probability of all 64 bits being output from a CSPRNG with a most significant bit value of 0 for all 235 such certificates is 1 in 2^235, it is extremely likely that these certificates do not contain the minimum number of bits (64) output from a CSPRNG and are therefore mis-issued under the Baseline Requirements.

A comprehensive list of the 235 unique certificates can be found here: https://gist.github.com/CBonnell/1f01ccd93667c37800b67e518340c606

Furthermore, two of the intermediates issued to DarkMatter which chain to QuoVadis/Digicert roots violate RFC 5280, section 4.2.1.10 (https://tools.ietf.org/html/rfc5280#section-4.2.1.10). Specifically, the dNSName in the nameConstraints extension's permittedSubtrees field contains a leading period (".ae"), which violates the hostname syntax specified in section 4.2.1.10. Therefore, these two intermediate certificates (https://crt.sh/?id=23432430&opt=cablint, https://crt.sh/?id=19415522&opt=cablint) are also mis-issued under the Baseline Requirements.

I have sent a Certificate Problem Report to Digicert to notify them of these findings, as these intermediates and DarkMatter-issued certificates chain to roots under their control.

Thanks,
Corey

Scott Rea

Feb 24, 2019, 7:39:34 PM
to Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
G’day Corey,

I did not check your math, but is it possible that you are interpreting the serial number conversion output as an unsigned integer representation? If so, then I can understand your potential concern regarding the findings of your analysis.

DarkMatter uses an EJBCA platform with the requisite setting for 64-bit random serial numbers and our source of entropy is a FIPS140 certified HSM, so I too was surprised by the findings you reported. However, during our investigation of this potential issue, we have thus far discovered that the platform appears to be compliant with the requisite standard, and the anomaly you are highlighting is potentially due just to the integer representation you are using in your calculations.

RFC 5280 (section 4.1.2.2) defines serialNumber to be a positive INTEGER, and X.690 defines INTEGER to consist of one or more octets and (specifically section 8.3.3) says the octets shall be a two's complement binary number equal to the integer value. Using the two's complement representation means that the output of the octet conversion is a signed integer, and it could be positive or negative – the range of integers representable in 64 bits being from –(2^63) to (2^63)-1. But since the RFC requires only positive integers, the 64 bits of output from the CSPRNG function may only result in positive numbers, and negative numbers cannot be used. In two's complement representation, the leading bit determines whether the number is positive or negative – for positive numbers, the leading bit will always be zero (if it is a 1, then that represents a negative number, which RFC 5280 prohibits).

So our findings are that the platform is indeed using 64-bit output from an appropriate CSPRNG for generating serialNumbers, and that the leading zero is exactly what is required to indicate that it is a positive number in two's complement representation of the INTEGER, which is the requirement under RFC 5280. Therefore our findings indicate that the serial number generation scheme used by DarkMatter in its certificate hierarchy does meet the requirements set forth in the Baseline Requirements, section 7.1.
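
As a neutral illustration of the encoding rules being cited, a hypothetical Python sketch (not EJBCA's or DarkMatter's code): DER does require a positive INTEGER, but a 64-bit CSPRNG output whose top bit is 1 need not be discarded; it is simply encoded with a leading 0x00 octet as a 9-octet serial.

    # Minimal DER content-octet encoding of a non-negative INTEGER.
    def der_integer_content(value: int) -> bytes:
        length = value.bit_length() // 8 + 1   # extra octet keeps the sign bit clear
        return value.to_bytes(length, "big")

    msb_clear = 0x470EFF0B2EB38340   # top bit 0 -> 8 content octets
    msb_set = 0xC70EFF0B2EB38340     # top bit 1 -> 9 content octets, leading 0x00

    print(der_integer_content(msb_clear).hex())  # 470eff0b2eb38340
    print(der_integer_content(msb_set).hex())    # 00c70eff0b2eb38340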


Regards,


--

Scott Rea

On 2/25/19, 12:50 AM, "dev-security-policy on behalf of Corey Bonnell via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

On Friday, February 22, 2019 at 4:28:30 PM UTC-5, Corey Bonnell wrote:
I would like to bolster my previous assertion that the serial number generation scheme used in the DarkMatter certificate hierarchy likely does not meet the requirements set forth in the Baseline Requirements, section 7.1.

A further analysis of all DarkMatter-issued certificates which were logged to Certificate Transparency logs with a notBefore date of 2016-09-30 or later was performed. This certificate corpus of 235 unique certificates (pre-certificate and final certificate pairs are considered to be a single “unique certificate” to avoid double-counting) is overwhelmingly comprised of end-entity TLS certificates, but there are also several infrastructure-related certificates (such as OCSP Response Signing certificates, etc.) included. DarkMatter has asserted that all publicly trusted TLS certificates that they have issued are logged to Certificate Transparency logs, so this set of 235 unique certificates includes the entirety of publicly trusted TLS certificates issued by DarkMatter since 2016-09-30.

This analysis has revealed that all 235 unique certificates have a serial number of 8 octets (64 bits) and a big-endian most significant bit set to 0. Given that the probability of all 64 bits being output from a CSPRNG with a most significant bit value of 0 for all 235 such certificates is 1 in 2^235, it is extremely likely that these certificates do not contain the minimum number of bits (64) output from a CSPRNG and are therefore mis-issued under the Baseline Requirements.

A comprehensive list of the 235 unique certificates can be found here: https://gist.github.com/CBonnell/1f01ccd93667c37800b67e518340c606

Furthermore, two of the intermediates issued to DarkMatter which chain to QuoVadis/Digicert roots violate RFC 5280, section 4.2.1.10 (https://tools.ietf.org/html/rfc5280#section-4.2.1.10). Specifically, the dNSName in the nameConstraints extension's permittedSubtrees field contains a leading period (".ae"), which violates the hostname syntax specified in section 4.2.1.10. Therefore, these two intermediate certificates (https://crt.sh/?id=23432430&opt=cablint, https://crt.sh/?id=19415522&opt=cablint) are also mis-issued under the Baseline Requirements.

I have sent a Certificate Problem Report to Digicert to notify them of these findings, as these intermediates and DarkMatter-issued certificates chain to roots under their control.

Thanks,
Corey


Scott Rea | Senior Vice President - Trust Services
Tel: +971 2 417 1417 | Mob: +971 52 847 5093
Scot...@darkmatter.ae

The information transmitted, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and destroy any copies of this information.

Scott Rea

Feb 24, 2019, 8:05:10 PM
to Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
G’day Corey,

In respect to the previously issued constrained intermediates – can you clarify where in RFC 5280 Section 4.2.1.10 the prohibition against a leading period in the name constraints is specified?
I see in the RFC the specific sentence: “When the constraint begins with a period, it MAY be expanded with one or more labels.” This appears to contradict your assertion that leading period constraints violate 5280…

During the period that these intermediates were deployed, the browsers and platforms dependent on these performed path processing exactly as expected with this configuration.

Can you please point to the passage in the RFC where you feel a leading period in a permitted subtree, e.g. “.ae” as was used in the identified intermediate certificates, is a violation?

Regards,


--

Scott Rea

On 2/25/19, 12:50 AM, "dev-security-policy on behalf of Corey Bonnell via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

Furthermore, two of the intermediates issued to DarkMatter which chain to QuoVadis/Digicert roots violate RFC 5280, section 4.2.1.10 (https://tools.ietf.org/html/rfc5280#section-4.2.1.10). Specifically, the dNSName in the nameConstraints extension's permittedSubtrees field contains a leading period (".ae"), which violates the hostname syntax specified in section 4.2.1.10. Therefore, these two intermediate certificates (https://crt.sh/?id=23432430&opt=cablint, https://crt.sh/?id=19415522&opt=cablint) are also mis-issued under the Baseline Requirements.

I have sent a Certificate Problem Report to Digicert to notify them of these findings, as these intermediates and DarkMatter-issued certificates chain to roots under their control.




Corey Bonnell

Feb 24, 2019, 8:48:44 PM
to mozilla-dev-s...@lists.mozilla.org
Hi Scott,
Thank you for the prompt response and the transparency in regard to the software stack used by your CA operations. The detailed response that you provided will hopefully make it easier to highlight the disconnect we have.

You are correct that ASN.1 INTEGERs are 2's-complement signed integers. Every DarkMatter-issued certificate that I've encountered (both those chained to Digicert roots as well as your roots as well as the DarkMatter root certificates themselves) has an INTEGER data size of exactly 8 octets (64 bits). By outputting 64 random bits from the CSPRNG and then coercing the most significant bit to 0 (to make the INTEGER value positive, as you mentioned) means that the CA software is discarding one bit from the CSPRNG (since the most significant bit is being coerced to 0) and embedding only 63 bits of CSPRNG output in the certificate serial number. Section 7.1 of the Baseline Requirements requires at least 64 bits output from a CSPRNG, so I do not believe the serial number generation scheme that you have described is compliant.

Thanks,
Corey

Corey Bonnell

Feb 24, 2019, 8:57:17 PM
to mozilla-dev-s...@lists.mozilla.org
Hi Scott,
The verbiage from RFC 5280, section 4.2.1.10 that you quoted is in regard to URI GeneralNames, as the paragraph starts with "For URIs...".

The relevant paragraph in section 4.2.1.10 that specifies the required syntax of dNSNames in nameConstraints and explains why the two intermediates are non-compliant is as follows:

"DNS name restrictions are expressed as host.example.com. Any DNS
name that can be constructed by simply adding zero or more labels to
the left-hand side of the name satisfies the name constraint. For
example, www.host.example.com would satisfy the constraint but
host1.example.com would not."

As you can see, there is no provision for encoding a leading period in dNSNames. Several certificate linters detect this particular problem, which you can see demonstrated in the two links I provided to the two intermediates' crt.sh entries.
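
A hypothetical Python sketch of the matching rule quoted above (the names are illustrative): a dNSName constraint is written without a leading dot, and a name satisfies it when zero or more whole labels are prepended to it.

    # RFC 5280 dNSName constraint check: the name satisfies the constraint if
    # it equals the constraint or is formed by adding labels on the left.
    def satisfies_dns_constraint(name: str, constraint: str) -> bool:
        name_labels = name.lower().split(".")
        constraint_labels = constraint.lower().split(".")
        if len(name_labels) < len(constraint_labels):
            return False
        return name_labels[-len(constraint_labels):] == constraint_labels

    print(satisfies_dns_constraint("www.host.example.com", "host.example.com"))  # True
    print(satisfies_dns_constraint("host1.example.com", "host.example.com"))     # False
    print(satisfies_dns_constraint("example.ae", "ae"))                          # True
    # ".ae" splits into ["", "ae"], so under this reading nothing can match it.
    print(satisfies_dns_constraint("example.ae", ".ae"))                         # False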

Thanks,
Corey

Scott Rea

Feb 24, 2019, 9:11:55 PM
to Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
G’day Corey,

I am not sure if the phrase “…outputting 64 random bits from the CSPRNG and then coercing the most significant bit to 0” is actually an accurate representation of what is happening under the covers – we have asked for clarification from the developers so we can all have an informed discussion (I know that DM is not the only CA using this platform). My anticipation is that what happens is that CSPRNG process is repeated until a positive INTEGER is returned. In which case a 64-bit output from a CSPRNG is contained in the serialNumber as is required. Please note, the requirement is not a 64-bit number, but that a 64-bit output from a CSPRNG process is contained in the serialNumber, and we believe this is exactly what is happening.


Regards,


--

Scott Rea

On 2/25/19, 5:48 AM, "dev-security-policy on behalf of Corey Bonnell via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

Hi Scott,
Thank you for the prompt response and the transparency in regard to the software stack used by your CA operations. The detailed response that you provided will hopefully make it easier to highlight the disconnect we have.

You are correct that ASN.1 INTEGERs are 2's-complement signed integers. Every DarkMatter-issued certificate that I've encountered (both those chained to Digicert roots as well as your roots as well as the DarkMatter root certificates themselves) has an INTEGER data size of exactly 8 octets (64 bits). By outputting 64 random bits from the CSPRNG and then coercing the most significant bit to 0 (to make the INTEGER value positive, as you mentioned) means that the CA software is discarding one bit from the CSPRNG (since the most significant bit is being coerced to 0) and embedding only 63 bits of CSPRNG output in the certificate serial number. Section 7.1 of the Baseline Requirements requires at least 64 bits output from a CSPRNG, so I do not believe the serial number generation scheme that you have described is compliant.

Scott Rea

Feb 24, 2019, 9:50:08 PM
to Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
G’day Corey,

I can see your point – perhaps the more accurate way explicitly allowed under 5280 would have been to encode the constraint as type uniformResourceIdentifier rather than the type dNSName that was used.
I don’t recall if we actually tried that in our tests at the time with QV, but I do know we had some debate about how best to reflect the desired constraints, because there did not seem to be any decent examples that we could find in the wild as to how others were achieving a country-level restriction. The configuration that was finally settled on existed as an example on one of the groups, and in testing it provided the desired results.

Even though the dNSName example in 5280 does not explicitly prohibit the leading “.” the example provided would lead most folks to that conclusion, and that is obviously how the linters are interpreting it.

These two Intermediates were re-signed without the nameConstraints extension after we realized most organizations based in the UAE are often using .com or .org anyway to host their sites, and therefore we couldn’t effectively meet the needs of local customers. So these two have not been distributed for a couple of years now anyway.



Regards,


--

Scott Rea

On 2/25/19, 5:57 AM, "dev-security-policy on behalf of Corey Bonnell via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

Hi Scott,
The verbiage from RFC 5280, section 4.2.1.10 that you quoted is in regard to URI GeneralNames, as the paragraph starts with "For URIs...".

The relevant paragraph in section 4.2.1.10 that specifies the required syntax of dNSNames in nameConstraints and explains why the two intermediates are non-compliant is as follows:

"DNS name restrictions are expressed as host.example.com. Any DNS
name that can be constructed by simply adding zero or more labels to
the left-hand side of the name satisfies the name constraint. For
example, www.host.example.com would satisfy the constraint but
host1.example.com would not."

As you can see, there is no provision for encoding a leading period in dNSNames. Several certificate linters detect this particular problem, which you can see demonstrated in the two links I provided to the two intermediates' crt.sh entries.

Thanks,
Corey




Matt Palmer

Feb 24, 2019, 9:55:40 PM
to dev-secur...@lists.mozilla.org
On Mon, Feb 25, 2019 at 02:11:40AM +0000, Scott Rea via dev-security-policy wrote:
> My anticipation is that what happens is that CSPRNG process is repeated
> until a positive INTEGER is returned. In which case a 64-bit output from
> a CSPRNG is contained in the serialNumber as is required.

That is not any better than just setting the MSB to zero. Imagine if a CA
said "we generate a 64-bit serial by getting values from the CSPRNG
repeatedly until the value is one greater than the previously issued
certificate, and use that as the serial number.". It's hard to imagine that
that would be considered sufficient, and it's fundamentally the same as the
process you're describing.

> Please note, the requirement is not a 64-bit number, but that a 64-bit
> output from a CSPRNG process is contained in the serialNumber, and we
> believe this is exactly what is happening.

If the process is repeatedly asking for a value from the CSPRNG until it
gets one it "likes", then no, you're not using 64 bits of output from a
CSPRNG. The value may be 64 bits long, but not all 64 of those bits came from
the CSPRNG -- some of the bits came from the acceptability test, not the
CSPRNG.
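
A small, hypothetical simulation of this point in Python, scaled down to 8-bit serials so the effect is easy to see: drawing from the CSPRNG and retrying until the top bit is clear can only ever produce half of the possible values, i.e. 7 effective bits.

    import secrets

    def draw_retry(bits: int) -> int:
        # Keep drawing until the top bit is clear (the "acceptability test").
        while True:
            value = secrets.randbits(bits)
            if value >> (bits - 1) == 0:
                return value

    seen = {draw_retry(8) for _ in range(20000)}
    print(len(seen), "distinct values out of", 2 ** 8)  # at most 128 of 256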

- Matt

Peter Gutmann

Feb 24, 2019, 10:26:02 PM
to Matt Palmer, dev-secur...@lists.mozilla.org
Matt Palmer via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>Imagine if a CA said "we generate a 64-bit serial by getting values from the
>CSPRNG repeatedly until the value is one greater than the previously issued
>certificate, and use that as the serial number.".

Well, something pretty close to that works for Bitcoin (the relation is <
rather than >). Come to think of it, you could actually mine cert serial
numbers, and then record them in a public blockchain, for auditability of
issued certificates.

(Note: This is satire. I'm not advocating using blockchain anything for
anything other than (a) pump-and-dump digital currency schemes and (b)
attracting VC funding).

Peter.

Scott Rea

Feb 25, 2019, 4:00:23 AM
to Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
G’day Corey,

To follow up on this thread, we have confirmed with the developers of the platform that the approach used to include 64-bit output from a CSPRNG in the serialNumber is to generate the required output and then test it to see if it can be a valid serialNumber. If it is not a valid serialNumber, it is discarded, and a new value is generated. This process is repeated until the first valid serialNumber is produced.

This process ensures that 64 bits output from a CSPRNG is used to generate each serialNumber that gets used, and this is compliant with the BRs, Section 7.1.

I will also point out that if the returned value is valid as a serialNumber, it is further checked to see if that value has not been used before, since there is obviously a minimal chance of collision in any truly random process. In this case the serialNumber value will also be discarded and the process repeated.

I think it is reasonable to expect that EVERY implementation of compliant CA software is doing this post-processing to ensure the intended serialNumber has not already been used, and this is not something unique to EJBCA. As such, every CA out there will have some process that requires post-processing of whatever value is returned, with the possibility of having to repeat the process if there is a collision.

Regards,


--

Scott Rea



Scott Rea | Senior Vice President - Trust Services
Tel: +971 2 417 1417 | Mob: +971 52 847 5093
Scot...@darkmatter.ae

The information transmitted, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and destroy any copies of this information.

On 2/25/19, 6:11 AM, "Scott Rea" <Scot...@darkmatter.ae> wrote:

G’day Corey,

I am not sure if the phrase “…outputting 64 random bits from the CSPRNG and then coercing the most significant bit to 0” is actually an accurate representation of what is happening under the covers – we have asked for clarification from the developers so we can all have an informed discussion (I know that DM is not the only CA using this platform). My anticipation is that what happens is that CSPRNG process is repeated until a positive INTEGER is returned. In which case a 64-bit output from a CSPRNG is contained in the serialNumber as is required. Please note, the requirement is not a 64-bit number, but that a 64-bit output from a CSPRNG process is contained in the serialNumber, and we believe this is exactly what is happening.


Regards,


--

Scott Rea

Paul Kehrer

Feb 25, 2019, 4:32:31 AM
to mozilla-dev-s...@lists.mozilla.org
Hi Scott,

Comments inline.

On February 25, 2019 at 4:58:00 PM, Scott Rea via dev-security-policy (
dev-secur...@lists.mozilla.org) wrote:

G’day Corey,

To follow up on this thread, we have confirmed with the developers of the
platform that the approach used to include 64-bit output from a CSPRNG in
the serialNumber is to generate the required output and then test it to see
if it can be a valid serialNumber. If it is not a valid serialNumber, it is
discarded, and a new value is generated. This process is repeated until the
first valid serialNumber is produced.

This process ensures that 64 bits output from a CSPRNG is used to generate
each serialNumber that gets used, and this is compliant with the BRs,
Section 7.1.

This approach (assuming it is accurately described) discards exactly half
of all values, thus halving the address space. That means there are 63 bits
of entropy, so I do not agree that this process is compliant with the
baseline requirements. More generally, RFC 5280 allows up to 20 octets in
the serial number field so why are you choosing to issue on the lower bound?



I will also point out that if the returned value is valid as a
serialNumber, it is further checked to see if that value has not been used
before, since there is obviously a minimal chance of collision in any truly
random process. In this case the serialNumber value will also be discarded
and the process repeated.

I don't believe all public CAs do collision detection because many have
chosen to implement serial generation such that collision is highly
improbable. For example, a CA may choose to generate a 160-bit value and
clamp the high bit to zero. This provides 159-bits of entropy, with a
collision probability of roughly 1 in 2 ** 79.5. Alternately, a CA might
choose to issue with 80-bits of entropy concatenated with a 64-bit
nanosecond time resolution timestamp. This provides 1 in 2 ** 40 collision
probability for any given nanosecond. As a final example, Let's Encrypt's
Boulder CA generates a 136-bit random value and prefixes it with an 8-bit
instance ID:
https://github.com/letsencrypt/boulder/blob/a9a0846ee92efa01ef6c6e107d2e69f4ddbea7c0/ca/ca.go#L511-L532

1 in 2 ** 79.5 is roughly as probable as a randomly generated number
successfully passing typical Miller-Rabin primality testing while in
reality being composite. This is not a risk we worry about when creating
new root keys.
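
A hypothetical Python sketch of the first scheme described above (160 random bits with the high bit cleared), together with the usual birthday-bound approximation; the numbers are illustrative only and this is not any CA's actual code.

    import secrets

    def generate_serial(total_bits: int = 160) -> int:
        value = secrets.randbits(total_bits)
        return value & ((1 << (total_bits - 1)) - 1)   # keep the low 159 bits -> positive

    def collision_probability(n: int, k: int) -> float:
        # Birthday approximation for n draws from a space of 2**k values.
        return n * (n - 1) / 2 ** (k + 1)

    print(hex(generate_serial()))
    print(collision_probability(10 ** 9, 159))    # ~7e-31 even after a billion certs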

Scott Rea

Feb 25, 2019, 5:43:04 AM
to Paul Kehrer, mozilla-dev-s...@lists.mozilla.org
G’day Paul,

I cannot speak for other CAs, I can only surmise what another CA that is as risk intolerant as we are might do. For us, we will collision test since there is some probability of a collision and the test is the only way to completely mitigate that risk.
There is a limitation in our current platform that sets the serialNumber bit-size globally, however we expect a future release will allow this to be adjusted per CA. Once that is available, we can use any of the good suggestions you have made below to adjust all our Public Trust offerings to move to larger entropy on serialNumber determination.

However, the following is the wording from Section 7.1 of the latest Baseline Requirements:
“Effective September 30, 2016, CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) containing at least 64 bits of output from a CSPRNG.”

Unless we are misreading this, it does not say that serialNumbers must have 64-bit entropy as output from a CSPRNG, which appears to be the point you and others are making. If that was the intention, then perhaps the BRs should be updated accordingly?

We don’t necessarily love our current situation in respect to entropy in serialNumbers; we would love to be able to apply some of the solutions you have outlined, and we expect to be able to do that in the future. However, we still assert that for now, our current implementation of EJBCA is still technically compliant with the BRs Section 7.1 as they are written. Once an update for migration to larger-entropy serialNumbers is available for the platform, we will make the adjustment to remove any potential further issues.

Regards,


--

Scott Rea

Jakob Bohm

Feb 25, 2019, 9:14:05 AM
to mozilla-dev-s...@lists.mozilla.org
On 25/02/2019 11:42, Scott Rea wrote:
> G’day Paul,
>
> I cannot speak for other CAs, I can only surmise what another CA that is as risk intolerant as we are might do. For us, we will collision test since there is some probability of a collision and the test is the only way to completely mitigate that risk.
> There is a limitation in our current platform that sets the serialNumber bit-size globally, however we expect a future release will allow this to be adjusted per CA. Once that is available, we can use any of the good suggestions you have made below to adjust all our Public Trust offerings to move to larger entropy on serialNumber determination.
>
> However, the following is the wording from Section 7.1 of the latest Baseline Requirements:
> “Effective September 30, 2016, CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) containing at least 64 bits of output from a CSPRNG.”
>
> Unless we are misreading this, it does not say that serialNumbers must have 64-bit entropy as output from a CSPRNG, which appears to be the point you and others are making. If that was the intention, then perhaps the BRs should be updated accordingly?
>
> We don’t necessarily love our current situation in respect to entropy in serialNumbers; we would love to be able to apply some of the solutions you have outlined, and we expect to be able to do that in the future. However, we still assert that for now, our current implementation of EJBCA is still technically compliant with the BRs Section 7.1 as they are written. Once an update for migration to larger-entropy serialNumbers is available for the platform, we will make the adjustment to remove any potential further issues.
>
> Regards,
>
>

I believe the commonly accepted interpretation of the BR requirement is
to ensure that for each new certificate generated, there are at least
2**64 possible serial numbers and no way for anyone involved to predict
or influence which one will be used.

This rule exists to prevent cryptographic attacks on the algorithms used
for signing the certificates (such as RSA+SHA256), and achieves this
protection because of the location of the serial number in the
certificate structure.

If all the serial numbers are strictly in the range 0x0000000000000001
to 0x7FFFFFFFFFFFFFFF then there is not enough protection of the signing
algorithm against these attacks.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Tim Shirley

Feb 25, 2019, 10:51:41 AM
to Scott Rea, Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
There are other ways to achieve a guarantee of non-collision besides re-generating. For example, we incorporate the timestamp of issuance into the serial number alongside the random bits. You could also incorporate a sequential value into your serial number. Both methods serve to guarantee that, even in the extremely unlikely event that you duplicate the random component of your serial number in 2 different certificates, you still have 2 different serial numbers.

But at least 64 bits of whatever is produced needs to be entropy, and if any "acceptability test" is applied after the random value is generated and actually rejects a value, then you've reduced your number of effective bits of entropy. From what has been described here, it seems clear that in this case what's actually being generated is 63 bits of entropy. Any process truly generating 64 bits of entropy should be producing serial numbers with 9 octets ~50% of the time.
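
A quick, hypothetical Python simulation of that last observation: a serial built from a full, unclamped 64-bit CSPRNG output needs a 9-octet DER encoding (a leading 0x00 plus 8 value octets) about half the time.

    import secrets

    def der_length(value: int) -> int:
        return value.bit_length() // 8 + 1   # minimal positive INTEGER content length

    samples = [secrets.randbits(64) for _ in range(100000)]
    nine_octets = sum(1 for v in samples if der_length(v) == 9)
    print(nine_octets / len(samples))   # ~0.5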

Regards,

Tim Shirley
Software Architect
t: +1 412.395.2234

www.securetrust.com <http://www.securetrust.com>

Introducing the Global Compliance Intelligence Report <https://www2.trustwave.com/Global-Compliance-Intelligence-Report-Registration.html>

Nick Lamb

Feb 25, 2019, 11:17:40 AM
to dev-secur...@lists.mozilla.org, Kurt Roeckx
On Sat, 23 Feb 2019 10:16:27 +0100
Kurt Roeckx via dev-security-policy
<dev-secur...@lists.mozilla.org> wrote:
> I would also like to have a comment from the current root owner
> (digicert?) on what they plan to do with it.

Two other things would be interesting from Digicert on this topic

1. To what extent does DarkMatter have practical ability to issue
independently of Digicert?

https://crt.sh/?caid=22507

It would be nice to know where this is on the spectrum of intermediate
CAs, between the cPanel intermediate (all day-to-day operations
presumably by Sectigo and nobody from cPanel has the associated RSA
private keys) and Let's Encrypt X3 (all day-to-day operations by Let's
Encrypt / ISRG and presumably nobody from IdenTrust has the associated
RSA private keys)


2. Does Digicert agree that current misissuances, even on seemingly
minor technical issues like threadbare random serial numbers, are their
problem, since they are the root CA and ultimately responsible for this
intermediate?


Nick.

Rob Stradling

Feb 25, 2019, 11:53:05 AM
to Nick Lamb, dev-secur...@lists.mozilla.org, Kurt Roeckx
On 25/02/2019 16:17, Nick Lamb via dev-security-policy wrote:
> On Sat, 23 Feb 2019 10:16:27 +0100
> Kurt Roeckx via dev-security-policy
> <dev-secur...@lists.mozilla.org> wrote:
>> I would also like to have a comment from the current root owner
>> (digicert?) on what they plan to do with it.
>
> Two other things would be interesting from Digicert on this topic
>
> 1. To what extent does DarkMatter have practical ability to issue
> independently of Digicert?
>
> https://crt.sh/?caid=22507
>
> It would be nice to know where this is on the spectrum of intermediate
> CAs, between the cPanel intermediate (all day-to-day operations
> presumably by Sectigo and nobody from cPanel has the associated RSA
> private keys)

Hi Nick. I can confirm that all day-to-day operations for the cPanel
intermediates are performed by Sectigo, and nobody from cPanel has the
associated RSA private keys.

> and Let's Encrypt X3 (all day-to-day operations by Let's
> Encrypt / ISRG and presumably nobody from IdenTrust has the associated
> RSA private keys)
<snip>

QuoVadis disclosed [1] that...

"The DarkMatter CAs were previously hosted and operated by QuoVadis, and
included in the QuoVadis WebTrust audits through 2017. In November 2017,
the CAs were transitioned to DarkMatter’s own control following
disclosure to browser root programs."

I take that to mean that DarkMatter are in possession of the RSA private
key corresponding to https://crt.sh/?caid=22507.


[1] https://www.quovadisglobal.com/QVRepository/ExternalCAs.aspx

--
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited

rich...@gmail.com

Feb 25, 2019, 1:01:02 PM
to mozilla-dev-s...@lists.mozilla.org
Apart from the concerns others have already raised, I am bothered by the wording of one of the DarkMatter commitments, which says that "TLS certs intended for public trust" will be logged. What does public trust mean? Does it include certificates intended only for use within their country? Those intended to be used only with a small, privately specified set of recipients?

Perhaps a better way to phrase my question is: what certs would DM issue that would *not* be subject to their CT logging SOP?

Is there any other trusted root that has made a similar exemption?

Matthew Hardeman

Feb 25, 2019, 1:07:38 PM
to rich...@gmail.com, mozilla-dev-security-policy
The answer to the question of what certificates they intend to CT log or
not may be interesting as a point of curiosity, but the in-product CT
logging requirements of certain internet browsers (Chrome, Safari) would
seem to ultimately force them to CT log the certificates that are intended
to be trusted by a broad set of internet browsers.

Matthew Hardeman

unread,
Feb 25, 2019, 2:02:19 PM2/25/19
to Richard Salz, mozilla-dev-security-policy
On Mon, Feb 25, 2019 at 12:15 PM Richard Salz <rich...@gmail.com> wrote:

> You miss the point of my question.
>
> What types of certs would they issue that would NOT expect to be trusted
> by the public?
>
>>
>>>
I get the question in principle. If it is a certificate not intended for
public trust, I suppose I wonder whether or not it's truly in scope for
policy / browser inclusion / etc discussions?

Buschart, Rufus

Feb 25, 2019, 3:08:04 PM
to mozilla-dev-security-policy
> Von: dev-security-policy <dev-security-...@lists.mozilla.org> Im Auftrag von Matthew Hardeman via dev-security-policy
If the certificate is part of a hierarchy that chains up to a root under Mozilla's Root Program, there should be no question about this - yes, it is in scope.

With best regards,
Rufus Buschart

Siemens AG
Information Technology
Human Resources
PKI / Trustcenter
GS IT HR 7 4
Hugo-Junkers-Str. 9
90411 Nuernberg, Germany
Tel.: +49 1522 2894134
mailto:rufus.b...@siemens.com
www.twitter.com/siemens

www.siemens.com/ingenuityforlife

Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive Officer; Roland Busch, Lisa Davis, Klaus Helmrich, Janina Kugel, Cedrik Neike, Michael Sen, Ralf P. Thomas; Registered offices: Berlin and Munich, Germany; Commercial registries: Berlin Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322



Jeremy Rowley

Feb 25, 2019, 3:43:45 PM
to Buschart, Rufus, mozilla-dev-security-policy
Hi all,

Sorry for the delayed response. Been traveling and haven't had a chance to
properly format my thoughts until now.

As you all know, DigiCert recently acquired the QuoVadis CA. As the operator
of the CA, DigiCert is responsible for the issuing CA controlled by
DarkMatter. DarkMatter controls the keys of the intermediate, maintains its
own CPS, and undergoes its own annual audits.

With one exception, DigiCert has a policy against signing externally
operated intermediate CAs capable of issuing TLS certificates. The exception
is for browser operators who want a cross-sign, which is not applicable in
this case. We've found that limiting external CAs provides a better
situation for users and the security community. As we've done in the past,
we feel there is a logical and legal way to address inherited contracts and
externally operating CAs so that we do not leave site operators and relying
parties using those sites in the lurch. We are committed to working with
DarkMatter on a plan that works for them and aligns to our policy.

As for DarkMatter, we have not received direct evidence of any intentional
mis-issuance or use of a certificate for a MiTM attack. As a policy, we do
not revoke certificates based purely on allegations of wrongdoing. This is
true regardless of source, such as trademark disputes, government requests
for revocation, and similar matters. We do revoke certificates after
receiving a formal request for revocation from a trust store operator. I
think this policy is a logical approach as the policy avoids government
influence and public pressure over unpopular sites while acknowledging the
browser's control over their root store.

If we are presented with direct evidence of serious BR violations or
wrongdoing, we will certainly act in the best interests of user security and
revoke the intermediate. Wrongdoing warranting revocation definitely
includes using the issuing CA to intercept communication where the
certificate holder does not control the domain.

We already received a formal report on the serial number issue. This issue
reminds me of a past discussion
(https://groups.google.com/forum/#!searchin/mozilla.dev.security.policy/entropy%7Csort:date/mozilla.dev.security.policy/UnR98QjWQQs/Afk5pDHqAQAJ)
which did not warrant removal from the root store. We plan on filing a bug
report on the incident and working with DarkMatter to ensure their systems
are compliant with the BRs. This includes their replacing non-compliant
certificates in accordance with BR section 4.9.1.1. Considering that the
certificates resulted from a misreading of the BRs rather than a malicious
act, and the historical precedent of not removing CAs for entropy issues, I
don't think there is justification to revoke the CA. We, of course, expect
DarkMatter to update their systems and work with the Mozilla community in
providing all information necessary to resolve concerns about their
operation.

Hopefully that answers questions you have. Feel free to ask me anything.

Jeremy

Jeremy Rowley

unread,
Feb 25, 2019, 3:48:18 PM2/25/19
to Buschart, Rufus, mozilla-dev-security-policy
If DarkMatter is issuing from a CA that chains to a Quovadis root trusted by
Mozilla, the issuance is in scope of the Mozilla policy. But that also
means the cert is publicly trusted. Thus, I read it as "all TLS certs issued
from the public ICA are publicly logged", which matches what Scott told me
in the past.

You really can't log private certs as they don't chain to a root trusted by
any of the CT logs.

Jeremy Rowley

unread,
Feb 25, 2019, 5:11:14 PM2/25/19
to mozilla-dev-security-policy
One other thing I wanted to get ahead of is that we are revoking three
DarkMatter issuing CAs tomorrow. This revocation was planned well before this
discussion started. These three certificates were issued in 2016 with
improper name constraints. The 2017 certificates currently in use are
replacements for those certificates and have no name constraints. The three
certificates are:

CN=DarkMatter Assured CA,O=DarkMatter LLC,C=AE
4812bd923ca8c43906e7306d2796e6a4cf222e7d 2024-04-29 22:53:00
6b6fa65b1bdc2a0f3a7e66b590f93297b8eb56b9
CN=DarkMatter High Assurance CA,O=DarkMatter LLC,C=AE
093c61f38b8bdc7d55df7538020500e125f5c836 2024-04-29 22:38:11
8835437d387bbb1b58ff5a0ff8d003d8fe04aed4
CN=DarkMatter Secure CA,O=DarkMatter LLC,C=AE
093c61f38b8bdc7d55df7538020500e125f5c836 2024-04-29 22:45:18
6a2c691767c2f1999b8c020cbab44756a99a0c41

Jeremy


Scott Rea

unread,
Feb 26, 2019, 5:20:59 AM2/26/19
to Tim Shirley, Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
G’day folks,

we appreciate the many suggestions made on the list to strengthen the entropy of random serialNumbers.

One challenge we currently face is that our platform (which does support higher entropy) only supports this at a global level. So if we make a global change, then ALL our CAs will use the larger serialNumbers, and this would have an impact, for example, on CAs which are in completely different hierarchies to those used for Public Trust, which would also have to adopt the change (and for CAs used for constrained environments, e.g. IoT, the size of each extension has an impact).

However, we have been working with our platform provider and can now report that effective beginning of next week, DarkMatter will move to using random 128-bit serial numbers for all our Public Trust certificates.

The remaining question is what, if anything, should be done about existing certificates with 64-bit serialNumbers.



Regards,


--

Scott Rea

Scott Rea

unread,
Feb 26, 2019, 5:21:46 AM2/26/19
to rich...@gmail.com, mozilla-dev-s...@lists.mozilla.org
G’day Rich,

DM has submitted Roots intended for Public Trust to Mozilla and other browser operators, but we also operate private trust PKIs under separate anchors. These private PKIs also issue certificates to secure TLS in closed environments, but Private Roots are not in public CT Logs and therefore these private TLS certs are not logged.

Regards,


--

Scott Rea

On 2/25/19, 9:59 PM, "dev-security-policy on behalf of rich.salz--- via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

Apart from the concerns others have already raised, I am bothered by the wording of one of the Dark Matter commitments, which says that "TLS certs intended for public trust" will be logged. What does public trust mean? Does it include certificates intended only for use within their country? Those intended to be used only on a small, privately-specified, set of recipients?

Perhaps a better way to phrase my question is: what certs would DM issue that would *not* be subject to their CT logging SOP?

Is there any other trusted root that has made a similar exemption?


Scott Rea

unread,
Feb 26, 2019, 9:02:53 AM2/26/19
to Richard Salz, mozilla-dev-s...@lists.mozilla.org
G’day Rich,

This is correct with one qualification: every TLS cert chained to the submitted Roots is CT logged. The exception is that we also issue Public Trust client certificates (through a separate Issuing CA), and these are not required to be logged. From memory, our EVs currently go to 4 different logs, and OVs go to 3 different logs. We don't do DV at this time.

Regards,

--
Scott Rea


Scott Rea
Senior Vice President - Trust Services


Level 15, Aldar HQ
Abu Dhabi, United Arab Emirates
T +971 2 417 1417
M +971 52 847 5093
E Scot...@darkmatter.ae

darkmatter.ae


From: Richard Salz <rich...@gmail.com>
Date: Tuesday, February 26, 2019 at 5:31 PM
To: Scott Rea <Scot...@darkmatter.ae>
Cc: <mozilla-dev-s...@lists.mozilla.org>
Subject: Re: DarkMatter Concerns

So then every cert signed by the keys intended for the trust store will be CT logged?

Scott Rea

unread,
Feb 26, 2019, 9:33:57 AM2/26/19
to Tim Shirley, Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
G’day folks,

we appreciate the many suggestions made on the list to strengthen the entropy of random serialNumbers.

One challenge we currently face is that our platform (which does support higher entropy) only supports this at a global level. So if we make a global change, then ALL our CAs will use the larger serialNumbers, and this would have an impact, for example, on CAs which are in completely different hierarchies to those used for Public Trust, which would also have to adopt the change (and for CAs used for constrained environments, e.g. IoT, the size of each extension has an impact).

However, we have been working with our platform provider and can now report that effective beginning of next week, DarkMatter will move to using random 128-bit serial numbers for all our Public Trust certificates.

The remaining question is what, if anything, should be done about existing certificates with 64-bit serialNumbers.



Regards,


--

Scott Rea

On 2/25/19, 7:51 PM, "dev-security-policy on behalf of Tim Shirley via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

There are other ways to achieve a guarantee of non-collision besides re-generating. For example, we incorporate the timestamp of issuance into the serial number alongside the random bits. You could also incorporate a sequential value into your serial number. Both methods serve to guarantee that, even in the extremely unlikely event that you duplicate the random component of your serial number in 2 different certificates, you still have 2 different serial numbers.

But at least 64 bits of whatever is produced needs to be entropy, and if any "acceptability test" is applied after the random value is generated and actually rejects a value, then you've reduced your number of effective bits of entropy. From what has been described here, it seems clear that in this case what's actually being generated is 63 bits of entropy. Any process truly generating 64 bits of entropy should be producing serial numbers with 9 octets ~50% of the time.
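A minimal sketch in Java of the arithmetic described above (illustrative only; the class and counter names are made up):

import java.math.BigInteger;
import java.security.SecureRandom;

public class SerialEntropyDemo {
    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        final int trials = 100_000;
        int nineOctetEncodings = 0;
        int eightOctetsAfterMask = 0;
        for (int i = 0; i < trials; i++) {
            // 64 uniformly random bits from a CSPRNG, treated as a non-negative integer.
            BigInteger full = new BigInteger(64, rng);
            // DER encodes INTEGER in two's complement, so a value with bit 63 set
            // needs a leading 0x00 octet: 9 octets in total, roughly half the time.
            if (full.toByteArray().length == 9) {
                nineOctetEncodings++;
            }
            // Clearing bit 63 keeps every encoding at 8 octets or fewer, but the value
            // is now drawn from only 2^63 possibilities: 63 bits of entropy.
            BigInteger clamped = full.clearBit(63);
            if (clamped.toByteArray().length <= 8) {
                eightOctetsAfterMask++;
            }
        }
        System.out.printf("9-octet encodings: %d of %d (about half)%n", nineOctetEncodings, trials);
        System.out.printf("8-octet-or-smaller encodings after masking: %d of %d (all)%n", eightOctetsAfterMask, trials);
    }
}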

Regards,

Tim Shirley
Software Architect
t: +1 412.395.2234

www.securetrust.com



On 2/25/19, 3:58 AM, "dev-security-policy on behalf of Scott Rea via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

I think it reasonable to expect that EVERY implementation of compliant CA software is doing this post-processing to ensure the intended serialNumber has not already been used, and this is not something unique to EJBCA. As such, every CA out there will have some process that requires post-processing of whatever value is returned, with the possibility of having to repeat the process if there is a collision.



Scott Rea

unread,
Feb 26, 2019, 10:06:09 AM2/26/19
to mozilla-dev-security-policy
G’day Folks,

DarkMatter CEO (Karim Sabbagh) has provided an official response to Mozilla on the recent media article about the UAE that referenced security and intelligence matters. Per Wayne's request to potentially share this on the list, I am attaching a copy of that letter to this post.

Regards,

--
Scott Rea

Mike Kushner

unread,
Feb 26, 2019, 10:06:40 AM2/26/19
to mozilla-dev-s...@lists.mozilla.org
Hi,

Since EJBCA as a product was mentioned we thought we could chime in with some background and updates.

EJBCA was possibly the first (certainly one of the first) CA products to use random serial numbers. From the very beginning, 64-bit random serial numbers from a CSPRNG were used. This was back in the days when the prevailing opinion was that sequential serial numbers had to be used, and we had some work convincing some users that random was better than sequential. By default 8 bytes (64 bits) were used, as it's a nice power-of-2 number. Back then longer serial numbers were frowned upon, and there were products out there that could not even parse 16-byte serial numbers, so 8 bytes was a good choice that was compatible.
As the years went by we introduced the possibility to use longer serial numbers through a configuration setting at build time. This setting is not convenient to use (automatically persisted across upgrades, etc.) in our Appliance and cloud platforms, and we had it on the roadmap to remedy this with a more user-friendly setting.

A very strong goal of EJBCA, and of PrimeKey, is to be compliant with both open standards and regulations, and thus we are not content to simply debate 63 vs 64 bits.
We are planning to release a fix version shortly with a convenient per-CA setting, defaulting to larger serials. Any existing CA will be easily configurable to use serial numbers of between 4 and 20 octets for future issuance. Serial numbers must be compliant with RFC 5280 and X.690, which is the test we apply to the output from the CSPRNG, only using compliant values.
https://jira.primekey.se/browse/ECA-4991

In addition to random serial numbers, EJBCA checks for collisions, so even in the very unlikely event of equal serial numbers being generated, no certificates with duplicated serial numbers should be issued from a CA based on the EJBCA software. By comparison, collisions do happen regularly in testing when using 32 bit serial numbers (and are averted), so the underlying checks function as we expect.
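For readers who want to see the shape of such a scheme, here is a minimal sketch in Java (a configurable serial length, a CSPRNG, a compliance filter on the raw output, and a collision check). The class name and the Set-based record of issued serials are illustrative stand-ins, not EJBCA's actual code:

import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.Set;

public final class RandomSerialSketch {
    private final SecureRandom rng = new SecureRandom();
    private final int octets; // e.g. 16 for 128-bit serials; 20 is the RFC 5280 maximum

    public RandomSerialSketch(int octets) {
        if (octets < 4 || octets > 20) {
            throw new IllegalArgumentException("serial length must be 4 to 20 octets");
        }
        this.octets = octets;
    }

    // Draws CSPRNG output until the value is a positive integer whose DER encoding
    // fits in the configured number of octets and has not been issued before.
    public BigInteger next(Set<BigInteger> alreadyIssued) {
        while (true) {
            byte[] raw = new byte[octets];
            rng.nextBytes(raw);
            BigInteger candidate = new BigInteger(1, raw);          // interpret as non-negative
            if (candidate.signum() <= 0) continue;                  // BRs: greater than zero
            if (candidate.toByteArray().length > octets) continue;  // keep the encoding at N octets
            if (!alreadyIssued.add(candidate)) continue;            // collision check: reject duplicates
            return candidate;
        }
    }
}

With 16- or 20-octet serials, rejecting the roughly half of draws whose DER encoding would grow by an octet costs a negligible fraction of a bit; applied to exactly 8 octets, the same filter is what leaves 63 rather than 64 bits.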

Looking back at when public CAs moved from sequential to random serial numbers, we believe this originated from the weakness in SHA1, making a preimage attack at least theoretically possible against SHA1. EJBCA supported SHA256 very early on as well. To our knowledge there are no known preimage attacks against SHA256; correct us if we're wrong. In other words, we do not consider this a security issue, but (only) a compliance issue.

Cheers,
Tomas Gustafsson (CTO)
and
Mike Agrenius Kushner (PO EJBCA)

Richard Salz

unread,
Feb 26, 2019, 10:07:48 AM2/26/19
to Scott Rea, mozilla-dev-s...@lists.mozilla.org

Rob Stradling

unread,
Feb 26, 2019, 10:16:57 AM2/26/19
to Scott Rea, mozilla-dev-security-policy
Hi Scott. It seems that the m.d.s.p list server stripped the
attachment, but (for the benefit of everyone reading this) I note that
you've also attached it to
https://bugzilla.mozilla.org/show_bug.cgi?id=1427262.

Direct link:
https://bug1427262.bmoattachments.org/attachment.cgi?id=9046699

On 26/02/2019 13:56, Scott Rea via dev-security-policy wrote:
> G’day Folks,
>
> DarkMatter CEO (Karim Sabbagh), has provided an official response to Mozilla on the recent media article about the UAE that referenced security and intelligence matters. Per Wayne’s request to potentially share this on the list, I am attaching a copy of that letter to this post.
>
> Regards,
>
>

--

Jonathan Rudenberg

unread,
Feb 26, 2019, 10:18:39 AM2/26/19
to dev-secur...@lists.mozilla.org
On Tue, Feb 26, 2019, at 10:06, Scott Rea via dev-security-policy wrote:
> G’day Folks,
>
> DarkMatter CEO (Karim Sabbagh), has provided an official response to
> Mozilla on the recent media article about the UAE that referenced
> security and intelligence matters. Per Wayne’s request to potentially
> share this on the list, I am attaching a copy of that letter to this
> post.

Since attachments tend to not come through on this list, here's a link: https://bugzilla.mozilla.org/attachment.cgi?id=9046699

This statement does not appear to me to directly deny any of the media reports; it simply calls them "misleading". There is no standard definition for "defensive cyber activities", which could mean a lot of things.

As far as I know DarkMatter has not provided a comment directly to Reuters at all, let alone denied the report.

Jonathan

Paul Wouters

unread,
Feb 26, 2019, 10:34:42 AM2/26/19
to mozilla-dev-s...@lists.mozilla.org
On Tue, 26 Feb 2019, Rob Stradling via dev-security-policy wrote:

> Hi Scott. It seems that the m.d.s.p list server stripped the
> attachment, but (for the benefit of everyone reading this) I note that
> you've also attached it to
> https://bugzilla.mozilla.org/show_bug.cgi?id=1427262.
>
> Direct link:
> https://bug1427262.bmoattachments.org/attachment.cgi?id=9046699

Thanks for sending the link. The letter is, uhm, interesting. It both
states that they cannot say anything for national security reasons, says
that they unconditionally comply with national security requirements
(implying even if that violates any BRs), and claims transparency for using
CT, which is in fact being forced on them by browser vendors.

"quests to protect their nations" definitely does not exclude "issuing
BR violating improper certificates to ensnare enemies of a particular
nation state".

Now of course, I don't think this is very different from US-based
companies that are forced to do the same by their governments, which
is why DNSSEC TLSA can be trusted more (and monitored better) than a
collection of 500+ CAs from all the main nation states known for
offensive cyber capabilities. But you can ignore this as off-topic :)

Paul

Hanno Böck

unread,
Feb 26, 2019, 10:35:11 AM2/26/19
to dev-secur...@lists.mozilla.org, Scott Rea
This statement repeats the claim that you wrote here previously,
specifically:
"I want to assure you that DarkMatter's work is solely focused on
defensive cyber security, secure communications and digital
transformation."

The statement does not comment on the Reuters article, but it is in
stark contradiction to it, as the article clearly describes "offensive cyber"
operations by DarkMatter.

I find it very hard to believe that the entire Reuters story is a hoax.
Nobody has challenged it yet. According to the Reuters article,
DarkMatter was asked for a comment and didn't reply.

Either the Reuters story is false or your CEOs statement is false. They
can't both be true.


--
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: FE73757FA60E4E21B937579FA5880072BBB51E42

Wayne Thayer

unread,
Feb 26, 2019, 10:42:16 AM2/26/19
to Scott Rea, Tim Shirley, Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
Scott,

On Tue, Feb 26, 2019 at 3:21 AM Scott Rea via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> G’day folks,
>
> we appreciate the many suggestions made on the list to strengthen the
> entropy of random serialNumbers.
>
> One challenge we face currently is that our platform (which does support
> higher entropy) but only supports this at a global level. So if we make a
> global change, then ALL our CAs will use the larger serialNumbers and this
> would have an impact for example on CAs which are in completely different
> hierarchies to those used for Public Trust to have to also adopt the change
> (and for CA’s used for constrained environments e.g. IoT, the size of each
> extension has an impact).
>
> However, we have been working with our platform provider and can now
> report that effective beginning of next week, DarkMatter will move to using
> random 128-bit serial numbers for all our Public Trust certificates.
>
> The remaining question is what should be done if anything about existing
> certificates with 64-bit serialNumbers?
>
I assume you are referring to those certificates containing a serial
number with effectively 63-bits of entropy? They are misissued. BR section
4.9.1.1 provides guidance.

Mozilla provides further guidance here:
https://wiki.mozilla.org/CA/Responding_To_An_Incident

Richard Salz

unread,
Feb 26, 2019, 11:08:03 AM2/26/19
to Scott Rea, mozilla-dev-s...@lists.mozilla.org
Thanks for the clarification.

Corey Bonnell

unread,
Feb 26, 2019, 12:02:24 PM2/26/19
to mozilla-dev-s...@lists.mozilla.org
According to the statement released today, DarkMatter considers their serial number generation logic to be compliant and these certificates are not mis-issued.

hac...@gmail.com

unread,
Feb 26, 2019, 5:10:02 PM2/26/19
to mozilla-dev-s...@lists.mozilla.org
It reminds me of a similar story in my country:
https://security.googleblog.com/2013/12/further-improving-digital-certificate.html

Please add a strict limit to the top-level domains in Firefox...

Matthew Hardeman

unread,
Feb 26, 2019, 7:31:47 PM2/26/19
to mozilla-dev-security-policy
I'd like to take a moment to point out that determination of the beneficial
ownership of business of various sorts (including CAs) can, in quite a
number of jurisdictions, be difficult to impossible (short of initiating
adverse legal proceedings) to determine.

What does this mean for Mozilla's trusted root program or any other root
program for that matter? I submit that it means that one rarely knows with
certainty the nature and extent of ownership and control over a given
business. This is especially true when you
start divorcing equity interest from right of control. (Famous example,
Zuckerberg's overall ownership of Facebook is noted at less than 30% of the
company, yet he ultimately has personal control of more than 70% of voting
rights over the company, the end result is that he ultimately can control
the company and its operations in virtually any respect.)

A number of jurisdictions allow for the creation of trusts, etc., for which the
ownership and control information is not made public. Several of those, in
turn, can each be owners of an otherwise normal looking LLC in an innocuous
jurisdiction elsewhere, each holding say, 10% equity and voting rights.
Say there are 6 of those. Well, all six of them can ultimately be proxies
for the same hidden partner or entity. And that partner/entity would
secretly be in full control. Without insider help, it would be very
difficult to determine who that hidden party is.

Having said all of this, I do have a point relevant to the current case.
Any entity already operating a WebPKI trusted root signed SubCA should be
presumed to have all the access to the professionals and capital needed to
create a new CA operation with cleverly obscured ownership and corporate
governance. You probably can not "fix" this via any mechanism.

In a sense, that DarkMatter isn't trying to create a new CA out of the
blue, operated and controlled by them or their ultimate ownership but
rather is being transparent about who they are is interesting.

One presumes they would expect to get caught at misissuance. The record of
noncompliance and misissuance bugs created, investigated, and resolved one
way or another demonstrates quite clearly that over the history of the
program a non-compliant CA has never been more likely to get caught and
dealt with than they are today.

I believe the root programs should require a list of human names with
verifiable identities and corresponding signed declarations of all
management and technical staff with privileged access to keys or ability to
process signing transactions outside the normal flow. Each of those people
should agree to a life-long ban from trusted CAs should they be shown to
take intentional action to produce certificates which would violate the
rules, lead to MITM, etc. Those people should get a free pass if they
whistle blow immediately upon being forced, or ideally immediately
beforehand as they hand privilege and control to someone else.

While it is unreasonable to expect to be able to track beneficial
ownership, formal commitments from the entity and the individuals involved
in day to day management and operations would lead to a strong assertion of
accountable individuals whose cooperation would be required in order to
create/provide a bad certificate. And those individuals could have "skin
in the game" -- the threat of never again being able to work for any CA
that wants to remain in the trusted root programs.

All of Google, Amazon, and Microsoft are in the program. All of these have
or had significant business with at least the US DOD and have a significant
core of managing executives as well as operations staff and assets in the
United States. As such, it is beyond dispute that each of these is
subordinate to the laws and demands of the US Government. Still, none of
these stand accused of using their publicly trusted root CAs to issue
certificates to a nefarious end. It seems that no one can demonstrate that
DarkMatter has or would either. If so, no one has provided any evidence of
that here.

It's beyond dispute that Mozilla's trusted root program rules allow for
discretionary exclusion of a participant without cause. As far as I'm
aware, that hasn't been relied upon as yet.

For technologists and logicians, it should rankle that it might be
necessary to make reliance upon such a provision in order to keep
DarkMatter out. In my mind, it actually calls into question whether they
should be kept out.

As DigiCert's representative has already pointed out, the only BR
compliance matter even suggested at this point is the serial number entropy
issue, and others have been given a pass on that. While I
suppose you could call this exclusionary and sufficient to prevent the
addition, it would normally be possible for them to create new key pairs
and issuance hierarchy and start again with an inclusion request for those,
avoiding that concern in round 2.

I think the knee-jerk reaction against this inclusion request ignores a
fundamental truth: by way of the QuoVadis signed SubCA, DarkMatter is and
has been operating a publicly trusted CA in the WebPKI for some time now.
What changes when they're directly included that increases the risk above
and beyond the now current presently accepted risk?

I strongly urge that the program contemplate that in the best case scenario
they don't and can't know the true beneficial ownership of the various CAs
and the map of the relationships of those beneficial owners to other
questionable entities. And following that line of thinking, that the
program presuppose that the ownership and governance/control is vested in
the most corruptible, evil people/groups imaginable and structure the
management of trust of CAs in a framework that assumes and accounts for as
much.

Peter Gutmann

unread,
Feb 26, 2019, 9:27:05 PM2/26/19
to mozilla-dev-s...@lists.mozilla.org, Mike Kushner
Mike Kushner via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>EJBCA was possible the first (certainly one of the first) CA products to use
>random serial numbers.

Random serial numbers have been in use for a long, long time, principally to
hide the number of certs a CA was (or wasn't) issuing. Here's the first one
that came up in my collection, from twenty-five years ago:

   0 551: SEQUENCE {
   4 400:   SEQUENCE {
   8   9:     INTEGER 00 A0 98 0F FC 30 AC A1 02
  19  13:     SEQUENCE {
  21   9:       OBJECT IDENTIFIER md5WithRSAEncryption (1 2 840 113549 1 1 4)
  32   0:       NULL
        :       }
[...]
  81  43:       SET {
  83  41:         SEQUENCE {
  85   3:           OBJECT IDENTIFIER organizationalUnitName (2 5 4 11)
  90  34:           PrintableString 'A Free Internet and SET Class 1 CA'
        :           }
        :         }
        :       }
 126  26:     SEQUENCE {
 128  11:       UTCTime '9609010000Z'
        :         Error: Time is encoded incorrectly.

RFC 3280 (2002) explicitly added handling for random data as serial numbers:

Given the uniqueness requirements above, serial numbers can be
expected to contain long integers. Certificate users MUST be able to
handle serialNumber values up to 20 octets.

(20 bytes being a SHA-1 hash, which was the fashion at the time).

Peter.

Scott Rea

unread,
Feb 27, 2019, 1:56:00 AM2/27/19
to Wayne Thayer, mozilla-dev-s...@lists.mozilla.org, Corey Bonnell, Tim Shirley
G’day Wayne et al,

I am not sure why members of the group keep making the claim that these certificates are misissued under the BRs.
Corey pointed to the following paragraph in Section 7.1 of the BRs as the source of the control that DM is accused of not complying with:

“Effective September 30, 2016, CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) containing at least 64 bits of output from a CSPRNG.”

DarkMatter has responded to show that we have actually followed this requirement exactly as it is written. Furthermore, since there seems to be a number of folks on the Group that believe more stringent controls are needed, DM has agreed to move all its public trust certificates to random serialNumbers with double the required entropy following our next change control in the coming week.

It is not a requirement of Section 7.1 that the serialNumber contain random numbers with 64 bits of entropy, which appears to be the claim you are making. If this was the intention of this section of the BRs, then perhaps we can propose such a change to the BRs. Perhaps something like the following could be proposed:

“Effective September 30, 2016, CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) and output from a CSPRNG such that the resulting serialNumber contains at least 64 bits of entropy.”

However, once again, I want to reiterate the current practice of DM for the public trust certificates that we have generated to date:
1. all serial numbers are non-sequential;
2. all serial numbers are greater than zero;
3. all serial numbers contain at least 64 bits of output from a CSPRNG

As such, all DM certificates that Corey specifically highlighted were issued in compliance with the BRs and specifically in compliance with Section 7.1 that Corey quoted.

If there is another requirement in the BRs with respect to serial numbers where it states that they must contain 64 bits of entropy, then can you please point it out?


Regards,

--

Scott Rea

On 2/26/19, 7:41 PM, "dev-security-policy on behalf of Wayne Thayer via dev-security-policy" <dev-security-...@lists.mozilla.org on behalf of dev-secur...@lists.mozilla.org> wrote:

>I assume you are referring to those certificates containing a serial
number with effectively 63-bits of entropy? They are misissued. BR section
4.9.1.1 provides guidance.




Matthew Hardeman

unread,
Feb 27, 2019, 2:15:37 AM2/27/19
to Scott Rea, Wayne Thayer, Tim Shirley, Corey Bonnell, mozilla-dev-s...@lists.mozilla.org
The issue I see with that interpretation is that the very same matter has
previously been discussed on this list and resolved quite vocally in
favor of the other position: that post-processing the CSPRNG output to
mask out the high-order bit makes at least that bit not truly the output
of the CSPRNG but rather the output of the mask.

Pedantically speaking, I actually favor your analysis. But that probably
will do you no favors in terms of public perception at a point when your
request for inclusion is at a crucial phase.

Ryan Sleevi

unread,
Feb 27, 2019, 2:55:34 AM2/27/19
to Matthew Hardeman, Corey Bonnell, Scott Rea, Tim Shirley, Wayne Thayer, mozilla-dev-s...@lists.mozilla.org
As Matthew highlights, this is not a new or novel interpretation.

It was introduced in Ballot 164 -
https://cabforum.org/2016/03/31/ballot-164/

The first discussion of this in the CA/B Forum as a Ballot was
https://cabforum.org/pipermail/public/2016-February/006893.html . This
discussion continues through June of that year, when it went to a vote.

During those discussions, there were concerns raised regarding entropy -
which, admittedly, I raised initially, but you can see others sharing
similar concerns. You can also see, however, in the discussions of GUIDs
how those concerns turned out to be well-founded.

Indeed, if you go to April,
https://cabforum.org/pipermail/public/2016-April/007245.html , you can find
discussion about how the maximum legally possible entropy was 159 bits,
precisely for the reason we’re discussing.
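A small sketch in Java of where that 159-bit ceiling comes from (illustrative only):

import java.math.BigInteger;
import java.security.SecureRandom;

public class MaxSerialEntropy {
    public static void main(String[] args) {
        // RFC 5280 caps serialNumber at 20 octets, and the DER encoding of a positive
        // INTEGER must have its most significant bit clear, so at most
        // 20 * 8 - 1 = 159 bits of the encoding can be freely random.
        SecureRandom rng = new SecureRandom();
        BigInteger serial = new BigInteger(159, rng); // uniform over [0, 2^159 - 1]
        System.out.println("encoded length in octets: " + serial.toByteArray().length); // never more than 20
    }
}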

While Scott’s proposed language (“entropy”) has problems that the Forum
discussed rather extensively, the question of the high bit was also
discussed during the ballot process. I believe the conclusion from that
discussion was that it would be unlikely for CAs to rest on exactly 64-bit
serials, as they would be able to avoid any ambiguity or confusion by simply
increasing the serial number space (say, to 65 bits).

Following this ballot, subsequent discussion was had regarding some CAs
that included serials of exactly 64 bits, and how those could not be
compliant.
https://groups.google.com/forum/m/#!msg/mozilla.dev.security.policy/YequotPYLdc/J0K3lUyzBwAJ
is such an example.

tomass...@gmail.com

unread,
Feb 27, 2019, 3:07:57 AM2/27/19
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, February 27, 2019 at 3:27:05 AM UTC+1, Peter Gutmann wrote:
> Mike Kushner via dev-security-policy <dev-secur...@lists.mozilla.org> writes:
>
> >EJBCA was possible the first (certainly one of the first) CA products to use
> >random serial numbers.
>
> Random serial numbers have been in use for a long, long time, principally to
> hide the number of certs a CA was (or wasn't) issuing. Here's the first one
> that came up in my collection, from twenty-five years ago:

Thanks Peter, you have an impressive collection (no irony, it is really cool!).
We still get asked by customers to implement sequential serial numbers from time to time, but it's getting more and more rare.

>
> RFC 3280 (2002) explicitly added handling for random data as serial numbers:

Ha, we were way before RFC3280 :-). Just being geeky, here is the code from EJBCA 1.0 (2001-12-05). CSPRNG, although it was seeded differently at that time (setSeed complements, not replaces, self-seeding in SecureRandom).

random = SecureRandom.getInstance("SHA1PRNG");
random.setSeed((long)(new Date().getTime()));
byte[] serno = new byte[8];
random.nextBytes(serno);

(https://sourceforge.net/projects/ejbca/files/ejbca1/ejbca-1_0/)

> Given the uniqueness requirements above, serial numbers can be
> expected to contain long integers. Certificate users MUST be able to
> handle serialNumber values up to 20 octets.

I know it's rare, but we've had clients with devices that can not handle more than 32 bits. Not WebPKI of course... Your X509 Style Guide is still a classic read btw.

> (20 bytes being a SHA-1 hash, which was the fashion at the time).

You make a good point there. The mere size of the serial number doesn't say much about the entropy. The auditor has to dig into the details.

Cheers,
Tomas

Thijs Alkemade

unread,
Feb 27, 2019, 6:57:42 AM2/27/19
to mozilla-dev-s...@lists.mozilla.org


On 27 Feb 2019, at 09:07, tomasshredder--- via dev-security-policy <dev-secur...@lists.mozilla.org<mailto:dev-secur...@lists.mozilla.org>> wrote:

On Wednesday, February 27, 2019 at 3:27:05 AM UTC+1, Peter Gutmann wrote:
Mike Kushner via dev-security-policy <dev-secur...@lists.mozilla.org<mailto:dev-secur...@lists.mozilla.org>> writes:

EJBCA was possible the first (certainly one of the first) CA products to use
random serial numbers.

Random serial numbers have been in use for a long, long time, principally to
hide the number of certs a CA was (or wasn't) issuing. Here's the first one
that came up in my collection, from twenty-five years ago:

Thanks Peter, you have an impressive collection (no irony, it is really cool!).
We still get asked by customers to implement sequential serial numbers from time to time, but it's getting more and more rare.


RFC 3280 (2002) explicitly added handling for random data as serial numbers:

Ha, we were way before RFC3280 :-). Just being geeky, here is the code from EJBCA 1.0 (2001-12-05). CSPRNG, although it was seeded differently at that time (setSeed complements not replaces self-seeding in SecureRandom).

random = SecureRandom.getInstance("SHA1PRNG");
random.setSeed((long)(new Date().getTime()));
byte[] serno = new byte[8];
random.nextBytes(serno);

(https://sourceforge.net/projects/ejbca/files/ejbca1/ejbca-1_0/)

(Sorry for continuing this off-topic thread.)

Hello Tomas,

I hope this is indeed not your current implementation and that it wasn’t in use anymore when ballot 164 became effective, because that’s not safe:

https://stackoverflow.com/a/27301156
https://android-developers.googleblog.com/2016/06/security-crypto-provider-deprecated-in.html

> You should never call setSeed before retrieving data from the "SHA1PRNG" in the SUN provider as that will make your RNG (Random Number Generator) into a Deterministic RNG - it will only use the given seed instead of adding the seed to the state. In other words, it will always generate the same stream of pseudo random bits or values.
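A minimal sketch in Java of the difference being described (illustrative only; the unsafe lines are left commented out, and none of this is anyone's production code):

import java.security.SecureRandom;

public class SeedingSketch {
    public static void main(String[] args) {
        // Unsafe with the legacy SUN "SHA1PRNG" provider: seeding before the first
        // output replaces its self-seeding, so the entire output stream is
        // determined by a low-entropy timestamp.
        // SecureRandom weak = SecureRandom.getInstance("SHA1PRNG");
        // weak.setSeed(new java.util.Date().getTime());

        // Safer: use the default constructor, let the implementation self-seed from
        // the OS entropy source, and do not call setSeed() at all.
        SecureRandom rng = new SecureRandom();
        byte[] serno = new byte[16];
        rng.nextBytes(serno);
        System.out.println("drew " + serno.length + " random bytes");
    }
}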

Best regards,
Thijs

Peter Gutmann

unread,
Feb 27, 2019, 7:19:17 AM2/27/19
to mozilla-dev-s...@lists.mozilla.org, tomass...@gmail.com
tomasshredder--- via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>We still get asked by customers to implement sequential serial numbers from
>time to time, but it's getting more and more rare.

Another reason for using random data, from the point of view of a software
toolkit provider, is that it's the only thing you can guarantee is unique in a
cert since there's no coordination between users over namespace use. A user
can configure their software or CA to have any name they like, and for small-
scale use that's often the case, "Web CA" or something similar. By providing
an unlikely-to-be-duplicated random value as the serial number, you don't run
into problems with Web CA #1's certs clashing with Web CA #2's certs.

In terms of sequential numbers, if for some reason the current serial number
isn't written to permanent storage correctly, or there's a system failure and
when things are restored the record of the last-used serial number is lost or
corrupted, you're in trouble. So overall it just made more sense to use
random values.

Peter.

tomass...@gmail.com

unread,
Feb 27, 2019, 7:47:13 AM2/27/19
to mozilla-dev-s...@lists.mozilla.org

> (Sorry for continuing this off-topic thread.)
>
> Hello Tomas,
>
> I hope this is indeed not your current implementation and that it wasn’t in use anymore when ballot 164 became effective, because that’s not safe:

I tried to say so in my original post, but I see I was not very clear.

Scott Rea

unread,
Feb 27, 2019, 8:10:07 AM2/27/19
to ry...@sleevi.com, Matthew Hardeman, mozilla-dev-s...@lists.mozilla.org, Corey Bonnell, Tim Shirley, Wayne Thayer, Jeremy Rowley
G’day Folks,

I want to thank Ryan for sharing the relevant discussion history regarding the change that precipitated the current language for serialNumber entropy in the BRs. Based on this background, it is clear to DM what is required for expected compliance with this control and that this was unilaterally applied across all CAs. (I will save for another time the discussion of when we should take the BRs at face value and when we must delve into the history of each specific ballot that precipitated a change; this is a potentially difficult challenge for newly applying members who do not have the benefit of a history of participation to fall back on.)

DM has already committed to updating our platform with double the required entropy for serialNumber as soon as practically possible. We have determined that with the help of our vendor, we can achieve this within the next 24 hours.
DM has currently stopped issuance of any public trust TLS certificates until the serialNumber entropy is updated. DM has compiled a list of all certificates issued that are still valid and have a 64-bit serialNumber. We have begun contacting each certificate owner to advise them that their certificate will be revoked within 24 hrs, and that DM will provide a replacement cert for them.
A challenge we face is that it is the end of day Wednesday here, and Thursday is the end of the work week in the UAE. It may not be possible to contact all certificate admins in the next 24 hours, and scheduling emergency updates in infrastructures at short notice may have a low probability of success. Since I believe the vulnerability associated with serialNumber entropy should only manifest during issuance and not during operation of the certificates, we may potentially hold off revoking an affected certificate until the appropriate admin can be reached, so as to minimize risk to our customers and their services. We expect the majority of active certificates to be replaced within 24 hours as required, and will inform the Group if there are any outstanding revocations that do not meet this timeline.

In terms of our Root submission to Mozilla (and other browsers) we will need to re-engage our WebTrust Auditors to be present before replacement hierarchies can be established. This may take up to a couple of weeks. Updated hierarchies will be resubmitted to the root inclusion process once they are available.

Further, we will post a bug report to capture these actions and subsequent updates.


Regards,


--

Scott Rea

Alex Gaynor

unread,
Feb 27, 2019, 9:31:05 AM2/27/19
to Matthew Hardeman, mozilla-dev-security-policy
(Writing in my personal capacity)

On Tue, Feb 26, 2019 at 7:31 PM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
[...]

> All of Google, Amazon, and Microsoft are in the program. All of these have
> or had significant business with at least the US DOD and have a significant
> core of managing executives as well as operations staff and assets in the
> United States. As such, it is beyond dispute that each of these is
> subordinate to the laws and demands of the US Government. Still, none of
> these stand accused of using their publicly trusted root CAs to issue
> certificates to a nefarious end. It seems that no one can demonstrate that
> DarkMatter has or would either. If so, no one has provided any evidence of
> that here.
>
>
>
I don't think this is well reasoned. There are several things going on here.
First, the United States government's sovereign jurisdiction has nothing to
do with any of these companies' business relationships with it. All would be
subject to various administrative and judicial procedures in any event.
Probably most relevantly, the All Writs Act (see Apple v. FBI) -- although
it's not at all clear that it would extend to a court being able to compel
a CA to misissue. (Before someone jumps in to say "National Security
Letter", you should probably know that an NSL is an administrative subpoena
for a few specific pieces of non-content metadata, not a magic catch-all.
https://www.law.cornell.edu/uscode/text/18/2709). Again, none of this is
affected by these companies' being government contractors.

Finally, I think there's a point that is very much being stepped around
here. The United States Government, including its intelligence services,
operate under the rule of law, it is governed by both domestic and
international law, and various oversight functions. It is ultimately
accountable to elected political leadership, who are accountable to a
democracy. The same cannot be said of the UAE, which is an autocratic
monarchy. Its intelligence services are not constrained by the rule of law,
and I think you see this reflected in the targeting of the surveillance
described in the Reuters article: journalists, human rights activists,
political rivals.

While it can be very tempting to lump all governments, and particularly all
intelligence services, into one bucket, I think it's important we consider
the variety of different ways such services can function.

Alex

Jakob Bohm

unread,
Feb 27, 2019, 10:18:03 AM2/27/19
to mozilla-dev-s...@lists.mozilla.org
On 27/02/2019 01:31, Matthew Hardeman wrote:
> I'd like to take a moment to point out that determination of the beneficial
> ownership of business of various sorts (including CAs) can, in quite a
> number of jurisdictions, be difficult to impossible (short of initiating
> adverse legal proceedings) to determine.
>
> What does this mean for Mozilla's trusted root program or any other root
> program for that matter? I submit that it means that anyone rarely knows
> to a certainty the nature and extent of ownership and control over a given
> business to a high degree of confidence. This is especially true when you
> start divorcing equity interest from right of control. (Famous example,
> Zuckerberg's overall ownership of Facebook is noted at less than 30% of the
> company, yet he ultimately has personal control of more than 70% of voting
> rights over the company, the end result is that he ultimately can control
> the company and its operations in virtually any respect.)
>
> A number of jurisdictions allow for creating of trusts, etc, for which the
> ownership and control information is not made public. Several of those, in
> turn, can each be owners of an otherwise normal looking LLC in an innocuous
> jurisdiction elsewhere, each holding say, 10% equity and voting rights.
> Say there are 6 of those. Well, all six of them can ultimately be proxies
> for the same hidden partner or entity. And that partner/entity would
> secretly be in full control. Without insider help, it would be very
> difficult to determine who that hidden party is.
>

While the ability to adversely extract such information for random
companies is indeed limited by various concerns (including the privacy
of charity activists and small business owners), the ability to get
this information willingly as audited facts is very common, and is
something that (non-technical) auditors are well accustomed to doing.

Thus root programs could easily request that information about
beneficial ownership etc. be included as fully audited facts in future
audit summary letters, subject to an appropriate launch date so that CAs
and auditors can include the needed billable work in their WebTrust audit
pricing and actually perform this work (typically by sending relevant
people from their business-auditing sister operations to do it properly).

> Having said all of this, I do have a point relevant to the current case.
> Any entity already operating a WebPKI trusted root signed SubCA should be
> presumed to have all the access to the professionals and capital needed to
> create a new CA operation with cleverly obscured ownership and corporate
> governance. You probably can not "fix" this via any mechanism.
>
> In a sense, that DarkMatter isn't trying to create a new CA out of the
> blue, operated and controlled by them or their ultimate ownership but
> rather is being transparent about who they are is interesting.
>
> One presumes they would expect to get caught at misissuance. The record of
> noncompliance and misissuance bugs created, investigated, and resolved one
> way or another demonstrates quite clearly that over the history of the
> program a non-compliant CA has never been more likely to get caught and
> dealt with than they are today.
>

- - - -

> I believe the root programs should require a list of human names with
> verifiable identities and corresponding signed declarations of all
> management and technical staff with privileged access to keys or ability to
> process signing transactions outside the normal flow. Each of those people
> should agree to a life-long ban from trusted CAs should they be shown to
> take intentional action to produce certificates which would violate the
> rules, lead to MITM, etc. Those people should get a free pass if they
> whistle blow immediately upon being forced, or ideally immediately
> beforehand as they hand privilege and control to someone else.
>
> While it is unreasonable to expect to be able to track beneficial
> ownership, formal commitments from the entity and the individuals involved
> in day to day management and operations would lead to a strong assertion of
> accountable individuals whose cooperation would be required in order to
> create/provide a bad certificate. And those individuals could have "skin
> in the game" -- the threat of never again being able to work for any CA
> that wants to remain in the trusted root programs.
>

This proposal is disastrously bad and needs to be publicly dispelled
by the root program administrators as quickly as possible for the
following reasons:

1. Punishing all CA operations experts that worked at a failed CA by
making them essentially unemployable for the failures of their
(former) boss is unjust in principle.

2. If there is even a hint that such a policy may be imposed, every
key holding CA employee at every CA will be under duress (by the
root programs) to hide all problems at their CA and defend every
bad decision and mistake of their CA for fear of becoming
blacklisted.

3. People with direct key access are obvious targets of unlawful
coercion, just like bank employees with keys to the vaults. We
really don't want to publish a hit list of whom criminal gangs
(etc.) should target with violence, kidnapping, blackmail etc.
when they want to get malicious certificates for use against high
value targets.

4. If a CA still practices the "off-site split key secret trustees"
way of preventing root key loss, publishing the names of those
trustees would defeat the purpose of that security measure.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10

Nick Lamb

unread,
Feb 27, 2019, 11:28:25 AM2/27/19
to dev-secur...@lists.mozilla.org, Alex Gaynor
On Wed, 27 Feb 2019 09:30:45 -0500
Alex Gaynor via dev-security-policy
<dev-secur...@lists.mozilla.org> wrote:

> Finally, I think there's a point that is very much being stepped
> around here. The United States Government, including its intelligence
> services, operate under the rule of law, it is governed by both
> domestic and international law, and various oversight functions. It
> is ultimately accountable to elected political leadership, who are
> accountable to a democracy.

So, on my bookshelf I have a large book with the title "The Senate
Intelligence Committee Report On Torture".

That book is pretty clear that US government employees, under the
direction of US government officials and with the knowledge of the
government's executive tortured and murdered people, to no useful
purpose whatsoever. For the avoidance of doubt, those are international
crimes.

Are those employees, officials and executives now... in prison? Did
they face trial, maybe in an international court explicitly created for
the purpose of trying such people?

Er, no, they are honoured members of US society, and in some cases
continue to have powerful US government jobs. The US is committed to
using any measures, legal or not, to ensure none of them see justice.


Sure, there are lots of places where there wouldn't even be a book
published saying "Here are these terrible things we did". But that's a
very low bar. For the purposes of m.d.s.policy we definitely have to
assume that the United States of America very much may choose to
disregard the "rule of law" if it suits those in power to do so.

I don't think the insistence that the UAE is definitively worse than
the US helps this discussion at all. We're not here to publish books
about awful things done by governments years after the fact, we're here
to protect Relying Parties. It is clear they will need protecting from
the US Government _and_ the United Arab Emirates.

Nick.

Matthew Hardeman

unread,
Feb 27, 2019, 12:04:07 PM2/27/19
to Alex Gaynor, mozilla-dev-security-policy
While I was going to respond to the below, Nick Lamb has beaten me to it.
I concur in full with the remarks in that reply.

We should not be picking national favorites as a root program. There's a
whole world out there which must be supported.

What we should be doing is ensuring that we know the parties involved, have
mechanisms for monitoring their compliance, and have mechanisms for
untrusting parties who misissue.

On Wed, Feb 27, 2019 at 8:30 AM Alex Gaynor <aga...@mozilla.com> wrote:

> (Writing in my personal capacity)
>
> I don't think this is well reasoned. There's several things going on here.
> First, the United States government's sovereign jurisdiction has nothing to
> do with any of these companies' business relationship with it. All would be
> subject to various administrative and judicial procedures in any event.
> Probably most relevantly, the All Writs Act (see; Apple vs FBI) -- although
> it's not at all clear that it would extend to a court being able to compel
> a CA to misissue. (Before someone jumps in to say "National Security
> Letter", you should probably know that an NSL is an administrative subpoena
> for a few specific pieces of a non-content metadata, not a magic catch all.
> https://www.law.cornell.edu/uscode/text/18/2709). Again, none of which is
> impacted by these company's being government contractors.
>
> Finally, I think there's a point that is very much being stepped around
> here. The United States Government, including its intelligence services,
> operate under the rule of law, it is governed by both domestic and
> international law, and various oversight functions. It is ultimately
> accountable to elected political leadership, who are accountable to a

Ryan Sleevi

unread,
Feb 28, 2019, 8:55:43 AM2/28/19
to Wayne Thayer, mozilla-dev-security-policy
(Writing in a personal capacity)

I want to preemptively apologize for the length of this message. Despite
multiple rounds of editing, there's still much to be said, and I'd prefer
to say it in public, in the spirit of those past discussions, so that they
can be both referred to and (hopefully) critiqued.

These discussions are no easy matter, as shown from the past conversations
regarding both TeliaSonera [1] and CNNIC [2][3][4][5]. There have been
related discussions [6][7], some of which even discuss the UAE [8]. If you
go through and read those messages, you will find many similar messages,
and from many similar organizations, as this thread has provoked.

In looking at these older discussions, as well as this thread, common
themes begin to emerge. These themes highlight fundamental questions about
what the goals of Mozilla are, and how best to achieve those goals. My hope
is to explore some of these questions, and their implications, so that we
can ensure we're not overlooking any consequences that may result from
particular decisions. Whatever the decision is made - to trust or distrust
- we should at least make sure we're going in eyes wide open as to what may
happen.

1) Objectivity vs Subjectivity

Wayne's initial message calls it out rather explicitly, but you can see it
similarly in positions from past Mozilla representatives - from Gerv
Markham, Sid Stamm, Jonathan Nightingale - and current, such as Kathleen
Wilson. The "it" I'm referring to is the tension between Mozilla's Root
Program, which provides a number of ideally objective criteria for CAs to
meet for inclusion, and the policy itself providing significant leeway for
Mozilla to remove CAs - for any reason or no reason - and to take steps to
protect its users and its mission, an arguably subjective decision. This
approach goes back to the very first versions of the policy, written by
Frank Hecker for the then-brand new Mozilla Foundation [9][10]. Frank
struggled with the issue then [11], so perhaps it is unsurprising that we
still struggle with it now. Thankfully, Frank also documented quite a bit
of his thinking in drafting that policy, so we can have some insight. [12]

The arguments for the application of a consistent and objective policy have
ranged from being a way to keep Mozilla accountable to its principles and
mission [13] to being a way to reduce the liability that might be
introduced by trusting a given CA [14]. In many ways, the aim of having an
objective policy was to provide a transparent insight into how the
decisions are made - something notably absent from programs such as
Microsoft or Apple. For example, you will not find any public discussion as
to why Microsoft continues to add some CAs years ahead of other root
programs, even CAs from organizations so rife with operational failures that
they have been disqualified from recognition by Mozilla, yet took years to
add Let's Encrypt, even though it was trusted by other programs and had gone
through public review. Mozilla's policy provides that transparency and
accountability, by providing a set of principles for which the decisions
made can be evaluated against. In theory, by providing this policy and
transparency, Mozilla can serve as a model for other root stores and
organizations - where, if they share those principles, then the application
of the objective policy should lead to the same result, thus leading to a
more interoperable web, thus fulfilling some of Mozilla's core principles.

In the discussions of CNNIC and TeliaSonera, there was often a rigid
application of policy. Unless evidence could be demonstrably provided of
not adhering to the stated policies - for example, misissued certificates -
the presumption of both innocence and trustworthiness was afforded to these
organizations by Mozilla. Factors that were not covered by policy - for
example, participation in providing interception capabilities, the
distribution of malware, supporting censorship - were discarded from the
consideration of whether or not to trust these organizations and include
their CAs.

During those discussions, and in this one, there's a counterargument that
these behaviours - or, more commonly, widely reported instances of these
behaviours (which lack the benefit of being cryptographically verifiable in
the way certificates are) - undermine the trustworthiness of the
organization. This argument goes that even if the audit is clean
(unqualified, no non-conformities), such an organization could still behave
inappropriately towards Mozilla users. Audits can be and are being gamed in
such a way as to disguise some known-improper behaviours. CAs themselves
can go from trustworthy to untrustworthy overnight, as we saw with
StartCom's acquisition by WoSign [15]. The principals involved may be
authorizing or engaging in the improper behaviour directly, as we saw with
StartCom/WoSign, or may themselves rely on ambiguous wording to provide an
escape, should they be caught, as we saw with Symantec. Because of this,
the argument goes, it's necessary to consider the whole picture of the
organization, and for the Module Owner and Peers to make subjective
evaluations based on the information available, which may or may not be
reliable or accurate.

The reality is that the policy needs both. As Frank called out in [12], a
major reasoning for the policy was not to provide an abstract checklist,
but to provide a transparent means of decision making, which can consider
and balance the views of the various stakeholders, with the goal of
fulfilling Mozilla's mission. Yet much of that objective policy inherently
rests on something frighteningly subjective - the audit regime that's come
to make up the bulk of the inclusion process. While the early drafts of the
policy considered schemes other than just WebTrust and ETSI, as it's
evolved, we're largely dependent on those two. And both of those rely on
the subjective opinion of auditors, which may or may not meet the Mozilla
community's definition of skilled, to provide their opinion about how well
the CA meets certain abstract audit criteria, for which there may or may
not have been technical guidance provided. As we've seen with the
disqualification of some auditors, those subjective judgements can create
real harm to Mozilla users.

2) Government CAs and Jurisdictional Issues

As the later messages in this thread have shown, but also those early
messages in both TeliaSonera and CNNIC's case, there is a general unease
about the legal implications of CAs and where they operate. This generally
manifests as a concern that a CA will be compelled, by the instruments of
government, to do something that violates policy. Whether it's
through the rule of law or a "friendly" stop-in by a government official,
the risk is still there. At times, these discussions can be informative,
but not uncommonly, they can devolve into "X country is better than Y
country" on some axis - whether it be rule of law, human rights, encryption
policies, or some other dimension. I don't say that to dismiss the
concerns, but to highlight that they continue to come up every time, and
often have a certain element of subjectivity attached to them.

At times, there's a subtle component to these discussions, which may or may
not get called out, which is about the motivations that the organization
may have in violating the policy. Commercial CAs may be motivated to
violate the policy for financial gain. Government CAs may be motivated to
do so to support their government functions. Quasi-governmental CAs may do
so to ensure their government contracts are maintained. This is a more
useful question, as it tries to unpack what might make the particular risk
- a policy violation - manifest.

It's not unreasonable at all to consider these issues - but in doing so,
the discussion is often masking something deeper, and thus it's necessary
to continue digging down into it.

3) Detection vs Prevention

Another common theme, which plays in with the above two, is a debate
between whether detection is a suitable substitute for prevention. In this
discussion, we see DarkMatter commenting that they've committed to log
certificates via Certificate Transparency. There are fundamental technical
issues with that - most notably, that such a decision does not protect
Mozilla users unless and until Mozilla implements CT - but there's also a
thorny policy issue as well, which is whether or not detection should be a
substitute for prevention. The CNNIC and TeliaSonera discussions equally
wrestled with this, although they lacked the benefit of CT - and a number
of users argued that, in the absence of detection mechanisms, prevention by
not trusting was better than the risk of trusting and not being able to
detect. With DarkMatter's application, there's a theoretical ability to
detect - but a host of practical issues that prevent that, both in the
ecosystem and specifically with Mozilla Firefox.

The counterargument would highlight that this risk exists with literally
every CA trusted by Mozilla today - any one of them could go rogue, and
detection is not guaranteed. Both CT and the audit regimes serve as ways to
try to determine after the fact if that happened, but they're both far from
guaranteed. For example, even in a perfect CT ecosystem of client auditing
and checking and no misbehaving logs, the ability to detect MITM still
wholly relies on site operators scanning those logs for their domain. As
such, CT doesn't necessarily mitigate the risk of MITM - it redistributes
it (somewhat), while providing a more concrete signal.
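
To make concrete what "site operators scanning those logs for their domain"
actually involves, here is a minimal sketch of such a monitor. It assumes
crt.sh's public JSON interface and its field names (neither is specified
anywhere in this thread), and a production monitor would instead follow the
CT logs directly per RFC 6962; this is only an illustration of where the
detection burden falls:

    import json
    import urllib.parse
    import urllib.request


    def certs_logged_for(domain):
        # crt.sh aggregates CT log entries; output=json is assumed to return
        # one record per logged (pre)certificate matching the query.
        url = "https://crt.sh/?q=" + urllib.parse.quote(domain) + "&output=json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.load(resp)


    if __name__ == "__main__":
        for entry in certs_logged_for("example.com"):
            # The field names below are assumptions about crt.sh's output.
            # An issuer the site operator did not authorize is exactly the
            # signal this kind of monitoring exists to surface - but only
            # if someone is looking.
            print(entry.get("not_before"), entry.get("serial_number"),
                  entry.get("issuer_name"))

The point stands either way: nothing in this loop happens unless the site
operator (or someone acting on their behalf) runs it, and it only ever runs
after issuance.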

4) Sub-CAs

Common with these discussions, and certainly shared between CNNIC and
DarkMatter, is the fact that these organizations are already in possession
of publicly trusted intermediate CAs and key material. In some ways,
whatever risk is present, the community has already been living with this
risk, due to the CAs that provided these cross-signs. Whatever may be felt
about the present organizations, it's not difficult to imagine there being
a 'worse' organization equally being granted such access and key material,
and the community may not discover that for 15 to 18 months after it has
occurred. Whatever decisions are made here, there's a further challenge in
ensuring these decisions are consistently applied. This isn't hypothetical
- unfortunately, we saw Certinomis do exactly this after StartCom was
removed from Mozilla's program [16].

5) Investigative Reporting

Another common theme in these discussions is the role in which
investigative reports should play. There are a number of concerning reports
describing behaviour that, as highlighted by Alex Gaynor, is arguably
inconsistent with the principles and values of Mozilla. DarkMatter is not the first CA to have
such reports published - as the discussions of TeliaSonera and CNNIC show,
there were similar discussions and reports at the time. It's very tempting
to lend credence to these reports, but at the same time, we've also seen
reports, such as those by Bloomberg regarding SuperMicro implants, which
appear more likely to manipulate markets than to be responsible
journalism.[17]

It's not clear to what extent such reports should play into discussions
about whether to trust or distrust a CA, yet it would seem, based on past
evidence, to be foolish to ignore them in the name of policy absolutism. At
the same time, as the world collectively struggles with how to evaluate
media, both in perception and execution, care should be taken as to how
much weight to give such reports.

6) Recommendations, Risks, and Suggestions

Mozilla could decide to rigidly apply the current policy, which it would
seem that DarkMatter is nominally on track to meet, much as it did in the
case of TeliaSonera and CNNIC. However, as those CAs showed, the
application of such a policy ultimately resulted in significant risk being
introduced to end users. If we are to use past precedents as future
predictors, then I would suggest that we don't have a good predictive model
to suggest it will go well.

It should come as no surprise that I'm highly skeptical of the audits and
the audit regime, and thus do not find them to be objective in the least. I'm
hugely appreciative of the continued collaboration in this community on how
to improve it, but I think there's still significant work to go to get
there. The way that we've been tackling this issue to date in this
community is centered around transparency - transparency in operations and
transparency through incident reports.

Regarding incident reports, I'm deeply concerned about DarkMatter's
engagement regarding non-compliance, which seems aimed more at excusing
than explaining. There's a risk that, in saying this, it will be
suggested that this is somehow a ploy to exclude new CAs - but as others
have highlighted, I think ALL CAs, both present and pending, need to
operate beyond reproach and with meaningful evaluation. Because of the risk
to the Internet that any and every CA poses, every one of them should be
aggressively looking for best practice. This means that they absolutely
should be evaluating the public discussions that happen in the CA/B Forum
to understand the intent of ballots, just as they should absolutely be
examining every CA incident - both closed and open - for similar things.
The BRs are not a guidebook for "how to be a CA" - they're the minimum
floor for which, if you dip below, there should be no trust.

Regarding transparency of operations, I think DarkMatter has missed a
number of opportunities to meaningfully engage in a way to provide some
reassurances. Transparency is not "We promise we don't do [bad thing]", but
to describe the system and controls for how it is designed. This is far
more extensive than a CP/CPS. Auditable versions of this continue to be
discussed with WebTrust (borrowing concepts and ideas from reporting such
as SOC 2) and ETSI, but there's no reason to not be more transparent now.

Regarding sub-CAs, prior to this thread, I had already filed an issue for
future consideration as a means of ensuring an appropriate level of
transparency before new sub-CAs are introduced [18]. This would
significantly reduce the risk of, say, an organization issuing a sub-CA and
then exiting the CA business, as QuoVadis has done, which I think provides
the wrong incentives for both the CA and the community.

As Frank Hecker's CA metapolicy [12] - the document that ultimately
informed the policy that was developed - called out, we should treat audits
as just one signal regarding a CA's operations, and potentially not a
particularly strong one. Mozilla would be entirely justified, under its
current policy, in rejecting the inclusion request and removing the current
(and any future) sub-CAs. Mozilla could also, arguing that it would be
unfair to hold DarkMatter to a higher standard than other CAs, accept them
and bear any perceived risks, only acting in the case of demonstrable or
repeated bad action.
Neither decision would be unprecedented, as I've hopefully highlighted.

One area of consequence I've struggled most with is this: If DarkMatter is
rejected, will that encourage other root programs to apply subjective,
non-transparent criteria, and in doing so, undermine some of Mozilla's
manifesto? In theory, Mozilla is the shining light of transparency and
consistency here, and I'd love to believe that has influenced other root
programs, such as those by Apple and Mozilla. Yet I struggle to see
evidence that this is the case - certainly in trust decisions, but at times
in distrust decisions. More importantly, the more time I spend working with
audits, both to understand the frameworks that lead to the audit regimes
[19][20] and how the audits function, it's clear to me that these were
designed very much to allow for subjectivity, business relationships, and
risks both technical and legal, to be determinants in trust. I think the
past decision making process has perhaps too dogmatically relied on policy
and the existence of audits, rather than the content, level of detail, and
qualifications of the auditors.

With respect to CT, I don't believe any weight should be given to
DarkMatter's commitment to log certificates to CT. While it's true that it
theoretically provides detection, that detection is not at all available to
Mozilla users, and even if it were, it's fundamentally after-the-fact and
helps only if the site operator is monitoring. For the set of concerns
that have been raised here, regardless of whether they are seen as
legitimate, the proposal for CT does not address them in any meaningful
way, nor can it. Even if I'm overlooking something here, Frank's advice in
[12] regarding reliance on new technology not yet implemented should be a
clear warning about the risk such reliance can pose.

Regardless of the decision, this thread has elicited a number of
well-intentioned suggestions for how to improve the policy in a way that
clearly states various expectations that the community may have.
Documenting beneficial ownership is one that I think is critically
necessary, as there are a number of CAs already participating in Mozilla's
program that have arguably complicated relationships. Until recently, for
example, Sectigo's primary ownership was through Francisco Partners, who
also owned the extremely controversial and problematic NSO Group [21][22].
Similarly, when Symantec was a recognized CA in Mozilla's program, their
acquisition of Bluecoat (and subsequent appointment to a number of key
leadership positions) was seen as concerning [23].

Similarly, I think that as long as we allow for new organizations (not
present in the hierarchy already) to be introduced as sub-CAs, then much of
this discussion is largely moot, because there will be an easy policy
loophole to exploit. I believe that it's in the best interests of users
that, regardless of the organization, new independently operated sub-CAs
undergo the same scrutiny and application process as roots. Once they're
in, I think there should be paths to trust - both for including them as
roots and for allowing them to switch to cross-signatures from other
organizations - but I am concerned about the ability to introduce new and
arbitrary players at will, especially with incentive structures that may
shift all of the burden and risk to Mozilla and its users.

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/mirZzYH5_pI/5LJ-X-XfIdwJ
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=542689
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=476766
[4]
https://groups.google.com/d/msg/mozilla.dev.security.policy/QEwyx6TQ5TM/qzX_WsKwvIgJ
[5]
https://groups.google.com/d/msg/mozilla.dev.security.policy/xx8iuyLPdQk/OvrUtbAkKRMJ
[6]
https://groups.google.com/d/msg/mozilla.dev.security.policy/WveW8iWquxs/S2I8yC36y2EJ
[7]
https://groups.google.com/d/msg/mozilla.dev.security.policy/kRRBCbE-t5o/8gYiB_B1D1AJ
[8]
https://groups.google.com/d/msg/mozilla.dev.security.policy/OBrPLsoMAR8/J4B0CU3JGdoJ
[9] http://hecker.org/mozilla/cert-policy-submitted
[10] http://hecker.org/mozilla/cert-policy-approved
[11] http://hecker.org/mozilla/cert-policy-draft-12
[12] http://hecker.org/mozilla/ca-certificate-metapolicy
[13] https://www.mozilla.org/en-US/about/manifesto/
[14] https://bugzilla.mozilla.org/show_bug.cgi?id=233453
[15] https://wiki.mozilla.org/CA:WoSign_Issues
[16]
https://groups.google.com/d/msg/mozilla.dev.security.policy/RJHPWUd93xE/6yhrL4nXAAAJ
[17]
https://www.businessinsider.com/security-community-voicing-increasing-doubts-about-bombshell-bloomberg-chinese-chip-hacking-2018-10
[18] https://github.com/mozilla/pkipolicy/issues/169
[19]
https://www.americanbar.org/content/dam/aba/events/science_technology/2013/pki_guidelines.pdf
[20] http://www.oasis-pki.org/
[21]
https://groups.google.com/d/msg/mozilla.dev.security.policy/AvGlsb4BAZo/OicDsN2rBAAJ
[22]
https://www.franciscopartners.com/news/nso-group-acquired-by-its-management
[23]
https://groups.google.com/d/msg/mozilla.dev.security.policy/akOzSAMLf_k/Y1D4-RXoAwAJ

Matthew Hardeman

unread,
Feb 28, 2019, 7:31:52 PM2/28/19
to Ryan Sleevi, Wayne Thayer, mozilla-dev-security-policy
I wanted to take a few moments to say that I believe that Ryan Sleevi's
extensive write-up is one of the most meticulously supported and researched
documents that I've seen discuss this particular aspect of trust delegation
decisions as pertains to the various root programs. It is an incredible
read that will likely serve as a valuable resource for some time to come.

An aspect in which I especially and wholeheartedly agree is the commentary
on the special nature of the Mozilla Root CA program and its commitment to
transparency of decisions. This is pretty unique among the trust stores
and I believe it's an extreme value to the community and would love to see
it preserved.

Broadly I agree with most of the substance of the positions Ryan Sleevi
laid out, but do have some thoughts that diverge in some aspects.

Regarding program policy as it now stands, it is not unreasonable to arrive
at a position that the root program would be better positioned to supervise
and sanction DarkMatter as a member Root CA than as a trusted SubCA. For
starters, as a practical matter, membership in the root program does not
offer DarkMatter a privilege or capability that they do not already have
today. (Save for, presumably, a license fee payable or already paid to
QuoVadis/Digicert.) By requiring directly interfacing with Mozilla on any
and all disputes or issues, root program membership would mean Mozilla gets
to make final decisions such as revocation of trust directly against the CA
and can further issue a bulletin putting all other CA program members on
notice that DarkMatter (if untrusted) is not, under any circumstances, to be
issued a SubCA chaining to a trusted root. The obvious recent precedent in
that matter is StartCom/StartSSL/WoSign.

On the topic of beneficial ownership I am less enthusiastic for several
reasons:

1. It should not matter who holds the equity and board level control if
the trust is vested in the executive team and the executive team held
accountable by the program. At that level, the CA just becomes an
investment property of the beneficial owners and the executive and
(hopefully) the ownership are aware that their membership in the root CA
program is a sensitive and fragile asset subject to easy total loss of
value should (increasingly easily detectible) malfeasance occur. I submit
that this may be imperfect, but I believe it's far more achievable than
meaningful understanding and monitoring of beneficial ownership.

2. Actually getting a full understanding of the down-to-the-person level
of the beneficial ownership is complex and in some cases might range from
infeasible to impossible. It's possible for even the senior management to
have less than full transparency to this.

3. Even if you can achieve a correct and full understanding of beneficial
ownership, it is inherently a point-in-time data set. There are ways to
transfer off the equity and/or control that may happen in multiple steps or
increments so as to avoid triggering change-of-control reporting, etc.
There are attorneys and accountants who specialize in this stuff and plenty
of legal jurisdictions that actively facilitate.

Ryan Sleevi

unread,
Mar 1, 2019, 12:48:10 PM3/1/19
to Matthew Hardeman, Ryan Sleevi, Wayne Thayer, mozilla-dev-security-policy
On Thu, Feb 28, 2019 at 7:31 PM Matthew Hardeman <mhar...@gmail.com>
wrote:

> Regarding program policy as it now stands, it is not unreasonable to
> arrive at a position that the root program would be better positioned to
> supervise and sanction DarkMatter as a member Root CA than as a trusted
> SubCA. For starters, as a practical matter, membership in the root program
> does not offer DarkMatter a privilege or capability that they do not
> already have today. (Save for, presumably, a license fee payable or
> already paid to QuoVadis/Digicert.) By requiring directly interfacing with
> Mozilla on any and all disputes or issues, root program membership would
> mean Mozilla gets to make final decisions such as revocation of trust
> directly against the CA and can further issue a bulletin putting all other
> CA program members on notice that DarkMatter (if untrusted) is not, under any
> circumstances, to be issued a SubCA chaining to a trusted root. The
> obvious recent precedent in that matter is StartCom/StartSSL/WoSign.
>

That is certainly one way to approach it. As I tried to call out with the
related sub-CA issue, the mere existence of third-party Sub-CAs does
fundamentally introduce a risk to any root program and its ability to
oversee its policies. For example, Microsoft requires contractual
negotiations with Root CAs, but sub-CAs provide a loophole to that. Mozilla
goes through detailed CP/CPS reviews, but sub-CAs bypass that. In theory,
the issuing CA accepts the risks and liability, but in practice, we see it
creates considerable difficulty for root programs. While we know that no CA
is too big to fail, we also know that if a widely-used CA encounters issues
- perhaps improper delegation to a sub-CA or RA - the costs and
consequences are borne by the ecosystem.

The approach you highlight is one approach that can be taken - which is to
move to a system where all organizations are root CAs, and no third-party
organizations may exist beneath a hierarchy, as the logical conclusion. I
tried to balance the status quo with a bit more oversight with my proposal
regarding introducing new organizations - which tries to balance the policy
oversight with the cost that it'd incur.


> On the topic of beneficial ownership I am less enthusiastic for several
> reasons:
>
> 1. It should not matter who holds the equity and board level control if
> the trust is vested in the executive team and the executive team held
> accountable by the program. At that level, the CA just becomes an
> investment property of the beneficial owners and the executive and
> (hopefully) the ownership are aware that their membership in the root CA
> program is a sensitive and fragile asset subject to easy total loss of
> value should (increasingly easily detectible) malfeasance occur. I submit
> that this may be imperfect, but I believe it's far more achievable than
> meaningful understanding and monitoring of beneficial ownership.
>

> 2. Actually getting a full understanding of the down-to-the-person level
> of the beneficial ownership is complex and in some cases might range from
> infeasible to impossible. It's possible for even the senior management to
> have less than full transparency to this.
>
> 3. Even if you can achieve a correct and full understanding of beneficial
> ownership, it is inherently a point-in-time data set. There are ways to
> transfer off the equity and/or control that may happen in multiple steps or
> increments so as to avoid triggering change-of-control reporting, etc.
> There are attorneys and accountants who specialize in this stuff and plenty
> of legal jurisdictions that actively facilitate.
>

I think you've got some very legitimate concerns here. However, I think
it's overlooking how closely this relates to policy.

If we believe that the organization - its reputation, its incentives, its
related efforts - matters, then I don't think we can overlook the need to
understand the structure that governs the organization. While it's true
that it's a point-in-time assessment, we've already seen how Section 8 of
the Mozilla Policy attempts to address such matters. This isn't
unprecedented, either - ICANN's Registrar Accreditation Agreements [1] and
PCI's QSA Agreement [2] provide some models for this. The demonstrable
example of StartCom/WoSign (and their related interests with Qihoo 360)
seems relevant, in which both organizational incentives and the officers
themselves were reason for sanction and concern.

I don't share your skepticism regarding the change of control overhead -
again, Section 8 demonstrates a way to tackle this already, through
continuous disclosure. While I don't suggest this is perfect, the important
element here is to improve transparency. While it's likely true that it can
be gamed, by being transparent (e.g. disclosure by CAs), this facilitates
investigation into misleading claims, as well as relationships that may be
relevant to trust decisions.

To this specific matter, consider this "hypothetical": If DarkMatter is
denied, whether through Root or as a Sub-CA, there is nothing to prevent
them from going and, say, acquiring the business interests of a smaller CA.
While Section 8 provides a number of clauses regarding the disclosure, this
could be done through an arms-length intermediate organization. If they did
not immediately change the CP/CPS, this may not be obvious through Section
8.1's requirements for disclosures, and given the discussion we've seen
both in the application and subsequent discussion about serial numbers, we
may see some interpretative differences as to whether subsequent changes to
the CP/CPS represent a material change in the CA's operations regarding
Section 8.

However, we may not even see changes in the CP/CPS; after all, we may see
principals directly authorize improper behaviour, as we saw with
WoSign/StartCom, and this may not be reflected in the audits themselves -
as we saw, with WoSign/StartCom. So to the extent we think that the
operating organization is relevant to a trust decision, it seems we need
more transparency here in order to make informed decisions.

Alternatively, if the view is that the organization itself is not relevant
- only the CP/CPS and audits are - then I agree that this information may
be seen as useless, because it shouldn't factor in to the decision making.
I don't believe we can argue both though - that the organization is
relevant for a trust decision, but the organizational structure isn't.

I'm not trying to suggest that DarkMatter is or is not untoward. I am,
however, trying to highlight that if the decision is to not trust, because
of the concerns that have been shared on this thread and because of the
organizational affiliation, then there are gaps, both in how ownership is
disclosed and how sub-CAs are issued, that can allow this decision to be
bypassed, and, as highlighted earlier, we've already seen some of that play
out in the past. Even if the decision is ultimately to trust, it may be
worthwhile to close these gaps, so that future policies have flexibility.

In my previous message, I highlighted some of the early materials [3][4]
that have been influential in the design of the current audit and
accreditation schemes. For PKIs in general, the fundamental assumption is
that a relationship between two PKIs (e.g. the "Mozilla Root Store PKI" and
the "DarkMatter CA") rests heavily on contractual relationships with strong
due diligence. The audit schemes are merely
complementary to those business relationships, and the value of an audit is
only recognized when you know who you're dealing with. That's why I think
there is inescapable relevance to the organization and organizational
structure in making and maintaining trust decisions, regardless of how that
plays out regarding DarkMatter, and as a result, much like we require the
disclosure of audits and PKI hierarchies, we should similarly require
disclosure of organizational and decision making hierarchy.

[1]
https://www.icann.org/en/system/files/files/approved-with-specs-27jun13-en.pdf
(pp. 64-66)
[2]
https://www.pcisecuritystandards.org/documents/QSA_Qualification_Requirements_v3_0.pdf
(Appendix A)
[3]
https://www.americanbar.org/content/dam/aba/events/science_technology/2013/pki_guidelines.pdf
[4] http://www.oasis-pki.org/resources/

raven...@gmail.com

unread,
Mar 1, 2019, 4:17:48 PM3/1/19
to mozilla-dev-s...@lists.mozilla.org
I have removed these malicious certificates from Firefox and my Windows machine, and contacted various anti-virus companies requesting that they mark these certificates as malicious in their scans. I suggest others do the same! Thank you for bringing this to my attention.

bxwa...@gmail.com

unread,
Mar 3, 2019, 3:17:03 PM3/3/19
to mozilla-dev-s...@lists.mozilla.org
On Friday, February 22, 2019 at 2:21:24 PM UTC-7, Wayne Thayer wrote:
> The recent Reuters report on DarkMatter [1] has prompted numerous questions
> about their root inclusion request [2]. The questions that are being raised
> are equally applicable to their current status as a subordinate CA under
> QuoVadis (recently acquired by DigiCert [3]), so it seems appropriate to
> open up a discussion now. The purpose of this discussion is to determine if
> Mozilla should distrust DarkMatter by adding their intermediate CA
> certificates that were signed by QuoVadis to OneCRL, and in turn deny the
> pending root inclusion request.
>
> The rationale for distrust is that multiple sources [1][4][5] have provided
> credible evidence that spying activities, including use of sophisticated
> targeted surveillance tools, are a key component of DarkMatter’s business,
> and such an organization cannot and should not be trusted by Mozilla. In
> the past Mozilla has taken action against CAs found to have issued MitM
> certificates [6][7]. We are not aware of direct evidence of misused
> certificates in this case. However, the evidence does strongly suggest that
> misuse is likely to occur, if it has not already.
>
> Mozilla’s Root Store Policy [8] grants us the discretion to take actions
> based on the risk to people who use our products. Despite the lack of
> direct evidence of misissuance by DarkMatter, this may be a time when we
> should use our discretion to act in the interest of individuals who rely on
> our root store.
>
> I would greatly appreciate everyone's constructive input on this issue.
>
> - Wayne
>
> [1] https://www.reuters.com/investigates/special-report/usa-spying-raven/
>
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262
>
> [3]
> https://groups.google.com/d/msg/mozilla.dev.security.policy/hicp7AW8sLA/KUSn20MrDgAJ
>
> [4]
> https://www.evilsocket.net/2016/07/27/How-The-United-Arab-Emirates-Intelligence-Tried-to-Hire-me-to-Spy-on-its-People/
>
> [5]
> https://theintercept.com/2016/10/24/darkmatter-united-arab-emirates-spies-for-hire/
>
> [6]
> https://groups.google.com/d/msg/mozilla.dev.security.policy/czwlDNbwHXM/Fj-LUvhVQYEJ
>
> [7] https://bugzilla.mozilla.org/show_bug.cgi?id=1232689
> [8]
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/

Insane that this is even being debated. If the floodgates are opened here you will NOT be able to get things back under control.

Matthew Hardeman

unread,
Mar 3, 2019, 5:54:47 PM3/3/19
to bxwa...@gmail.com, mozilla-dev-security-policy
On Sun, Mar 3, 2019 at 2:17 PM bxward85--- via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> Insane that this is even being debated. If the floodgates are opened here
> you will NOT be able to get things back under control.
>

While I can appreciate the passion of comments such as this, I think we're
still back at a core problem:

How can you reconcile this position with the actual program rules &
guidelines? If they're declined on some discretionary basis, you loose the
transparency that's made the Mozilla root program so uniquely valuable.

Other than the relatively minor issues which have already been brought to
light (and presently DarkMatter seems to be contemplating the generation of
a whole new root and issuing hierarchy to address those), where are the
rules violations that would keep them out?

Ryan Sleevi

unread,
Mar 3, 2019, 7:13:29 PM3/3/19
to Matthew Hardeman, bxwa...@gmail.com, mozilla-dev-security-policy
On Sun, Mar 3, 2019 at 5:54 PM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Sun, Mar 3, 2019 at 2:17 PM bxward85--- via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
> >
> > Insane that this is even being debated. If the floodgates are opened here
> > you will NOT be able to get things back under control.
> >
>
> While I can appreciate the passion of comments such as this, I think we're
> still back at a core problem:
>
> How can you reconcile this position with the actual program rules &
> guidelines? If they're declined on some discretionary basis, you loose the
> transparency that's made the Mozilla root program so uniquely valuable.


It is not clear how this follows. As my previous messages tried to capture,
the program is, and has always been, inherently subjective and precisely
designed to support discretionary decisions. These do not seem to
inherently conflict with or contradict transparency.

Even setting aside the examples of inclusions - ones which were designed to
be based on a communal evaluation of risks and benefits - one can look at
the fact that every violation of the program rules and guidelines has not
resulted in CAs being immediately removed. Every aspect of the program,
including the audits, is discretionary in nature.

It would be useful to understand where and how you see the conflict, though.

Buschart, Rufus

unread,
Mar 4, 2019, 5:29:47 AM3/4/19
to Matthew Hardeman, bxwa...@gmail.com, mozilla-dev-security-policy
Writing with my personal hat on:


> -----Ursprüngliche Nachricht-----
> Von: dev-security-policy <dev-security-...@lists.mozilla.org> Im Auftrag von Matthew Hardeman via dev-security-policy
> On Sun, Mar 3, 2019 at 2:17 PM bxward85--- via dev-security-policy < dev-secur...@lists.mozilla.org> wrote:
> > Insane that this is even being debated. If the floodgates are opened
> > here you will NOT be able to get things back under control.
>
> While I can appreciate the passion of comments such as this, I think we're still back at a core problem:
>
> How can you reconcile this position with the actual program rules & guidelines? If they're declined on some discretionary basis, you
> lose the transparency that's made the Mozilla root program so uniquely valuable.
>
> Other than the relatively minor issues which have already been brought to light (and presently DarkMatter seems to be contemplating
> the generation of a whole new root and issuing hierarchy to address those), where are the rules violations that would keep them out?

If I remember correctly, there was once a fundamental agreement not to include new TSPs' Root CAs if they don't bring structural benefit to the whole ecosystem. I can't see how DarkMatter brings any benefit to the ecosystem at all, so I believe there is good reason not to accept this root inclusion request.

/Rufus

Wayne Thayer

unread,
Mar 4, 2019, 10:34:26 AM3/4/19
to Buschart, Rufus, Matthew Hardeman, bxwa...@gmail.com, mozilla-dev-security-policy
If this were the standard, far more inclusion requests would be denied. A
stronger argument along these lines is that we have plenty of CAs, so there
is no good reason to take a risk on one that we lack confidence in.

Matthew Hardeman

unread,
Mar 4, 2019, 11:04:00 AM3/4/19
to Ryan Sleevi, Brett Ward, mozilla-dev-security-policy
On Sun, Mar 3, 2019 at 6:13 PM Ryan Sleevi <ry...@sleevi.com> wrote:

>
> It is not clear how this follows. As my previous messages tried to
> capture, the program is, and has always been, inherently subjective and
> precisely designed to support discretionary decisions. These do not seem to
> inherently conflict with or contradict transparency.
>
> Even setting aside the examples of inclusions - ones which were designed
> to be based on a communal evaluation of risks and benefits - one can look
> at the fact that every violation of the program rules and guidelines has
> not resulted in CAs being immediately removed. Every aspect of the program,
> including the audits, is discretionary in nature.
>
> It would be useful to understand where and how you see the conflict,
> though.
>

I think my disconnect arises in as far as that for the period of time in
which I've tracked the program and this group, I can not recall use of
subjective discretion to deny admission to the program. Any use of a
subjective basis as the lead cause for not including Dark Matter would, to
my admittedly limited time-window of observation in this area, be new
territory.

Ryan Sleevi

unread,
Mar 4, 2019, 11:32:01 AM3/4/19
to Matthew Hardeman, Ryan Sleevi, Brett Ward, mozilla-dev-security-policy
On Mon, Mar 4, 2019 at 11:03 AM Matthew Hardeman <mhar...@gmail.com>
wrote:
Thanks for clarifying!

I think you're absolutely correct, which is that for a number of years, the
program strictly acted based on rules-based testing: if you met a series
of criteria, which were believed to be objective, then you'd be admitted
into the program, even when there were concerns and objections.

In my previous message, I tried to highlight how the original design of the
program itself was meant to encourage more consideration than merely that
rules-based testing, by highlighting some of the early discussions and
Frank's goals. I also tried to provide examples of threads where past
Mozillan Module Peers or Owners argued in favor of just that - treating it
as a rule-based mechanism - in the hope of providing objective consistency.
However, I also tried to highlight that the assumption that the
existing "checklist" approach is objective is itself flawed - it's
actually resting on a host of subjective elements. I also tried to
highlight how the dogmatic checklist approach has, in turn, left users at
risk, and that CA removals are, to some extent, fundamentally subjective -
since we aren't removing every CA for the most minor infraction, and even
CAs with major infractions have still undergone some discussion.

There was certainly a lot more to say - I ended up cutting several more
pages of discussions and references around audits and root program goals. I
did try to highlight, but perhaps insufficiently, that considering more
factors than just the list-checking was both permitted in the current
policy and anticipated from the beginning of the policy. The items that are
checklists - such as audits - were merely seen as a short-hand way for
Mozilla (and the community) to obtain some degree of assurance that CAs did
what they say. In doing so, I was trying to highlight how the past several
years have shown structural deficiencies from over-reliance on audits to
achieve such a goal, while also trying to briefly highlight that audits -
or at least, the audits we use today - fundamentally weren't meant to be
used in the list-checking way that they were being used.

Wayne's initial message here is, I think, trying to address the
transparency concern, which is why I struggled to see the conflict that I
understood you to be referencing. One can be transparent, while also
considering more than just checkboxes, and I think this discussion is
trying to ensure that transparency and community feedback.

I think there are a couple of core questions, and differences in how these
are answered may perhaps explain some of your discontent:
1) Would denying Dark Matter represent a fundamental change in the Root
Store Policy, or is it consistent with the existing policy?
2) If it does represent a change, should Dark Matter be accepted under the
old policy, or (possibly) rejected under some new policy that might emerge?
3) If Dark Matter was rejected, would the basis for the rejection be
something objective (e.g. perhaps some definition of 'credible' reports
from 'credible' news agencies linking the organization to certain
'objectionable' behaviours, for some value of 'credible' and
'objectionable') or would it be seen as something subjective?

My previous response tried to capture that I think a decision would be
consistent with existing policy (#1), and that the policy wholly allows for
subjective evaluations (#3). The question that I left unanswered was #2 -
but tried to highlight past cases and discussions for which CA inclusions
were deferred until the development or resolution of fundamental policy
questions, and that allows for either answer to #2.

I can totally understand that, given past precedent, it may seem as if the
answer to #1 is that this discussion does represent a meaningful change in
policy, and that because of it, the answer to #2 may be to accept DM now,
while debating what the answer to #3 should be.

Do you think that captures some of the disconnect you may be seeing?

Wayne Thayer

unread,
Mar 4, 2019, 11:40:11 AM3/4/19
to Matthew Hardeman, Ryan Sleevi, Brett Ward, mozilla-dev-security-policy
On Mon, Mar 4, 2019 at 9:04 AM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Sun, Mar 3, 2019 at 6:13 PM Ryan Sleevi <ry...@sleevi.com> wrote:
>
> >
> > It is not clear how this follows. As my previous messages tried to
> > capture, the program is, and has always been, inherently subjective and
> > precisely designed to support discretionary decisions. These do not seem
> to
> > inherently conflict with or contradict transparency.
> >
> > Even setting aside the examples of inclusions - ones which were designed
> > to be based on a communal evaluation of risks and benefits - one can look
> > at the fact that every violation of the program rules and guidelines has
> > not resulted in CAs being immediately removed. Every aspect of the
> program,
> > including the audits, is discretionary in nature.
> >
> > It would be useful to understand where and how you see the conflict,
> > though.
> >
>
> I think my disconnect arises in as far as that for the period of time in
> which I've tracked the program and this group, I can not recall use of
> subjective discretion to deny admission to the program. Any use of a
> subjective basis as the lead cause for not including Dark Matter would, to
> my admittedly limited time-window of observation in this area, be new
> territory.
>
I was concerned by the idea that discretionary decisions inherently lack
transparency, but it sounds like we are agreeing that is not the case. In
my experience, the approval or denial of a root inclusion request often
comes down to a subjective decision. Some issues exist that could
technically disqualify the request (e.g. DarkMatter's serial number
entropy) and we have to weigh the good, 'meh', and bad of the request to
come to a decision. Sometimes we say 'no' (e.g. [1], [2]).

- Wayne

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/wCZsVq7AtUY/Uj1aMht9BAAJ
[2]
https://groups.google.com/d/msg/mozilla.dev.security.policy/fTeHAGGTBqg/l51Nt5ijAgAJ

Matthew Hardeman

unread,
Mar 4, 2019, 11:56:30 AM3/4/19
to Wayne Thayer, Ryan Sleevi, Brett Ward, mozilla-dev-security-policy
My perspective is that of an end user and also that of a software developer
involved in a non-web-browser space in which various devices and
manufacturers generally defer to the Mozilla root program's trust store.
As such, I'm quite certain that my opinions don't -- and should not -- have
the weight that yours or Ryan Sleevi's do.

That said, I do think the principal concern with a discretionary basis for
reaching a decision is that it leaves the door open to criticism of
favoritism, etc. - all things that transparency helps avoid. As such, I don't personally
consider use of discretion in these matters to be in keeping with
transparency.

I agree that there is the technical matter of the serial numbers,
but it seems as though DarkMatter has already signaled a willingness to cut
a new hierarchy that wouldn't have that problem. By historical precedent,
that would be more than enough remediation. Discussion also suggests that
there may yet be more CAs with these same serial number issues lurking in
the weeds owing to the default configuration of the predominant CA software.

While the other two examples get to "no", it is apparent that both of those
cases had a more complicated set of extant issues in the hierarchies which
were being put forth for inclusion/upgrade.

I agree that there's inevitably discretion exercised as to the measure of
sanction or remediation required. But I can't find any recent cases of
discretion effectively barring an entity from inclusion.

Scott Rea

unread,
Mar 5, 2019, 9:15:45 AM3/5/19
to mozilla-dev-security-policy
I have addressed most if not all of the various technical comments on this list with respect to DarkMatter’s Roots submission, and it might be helpful if I summarize here the raised Compliance Concerns and Risk of Misuse Concerns:

1. Compliance

Questions have been raised about DarkMatter’s compliance, and I want to again emphasize that we have a proven and clean compliance record over three years of operations and have been through two complete WebTrust audits that have verified that we are operating within industry standards.

• DM has resolved all technical and policy issues raised in the UAE and DM Roots submission process on the Mozilla list: see https://bugzilla.mozilla.org/show_bug.cgi?id=1427262

• Since the inception of the DM CA we have had two technical compliance issues during its three years of operation; both were addressed immediately and resolved.

• The first was that DM misissued 2 TLS certificates that were not in compliance with the BRs, as reported by Rob Stradling - specifically, the FQDN listed in the CN was not also included in the SAN (a minimal check for this requirement is sketched after this list). The 2 offending certs had already been flagged by DM, were held and revoked, were only in existence for approximately 18hrs, and at NO TIME were they LIVE on the internet protecting any services. They were promptly replaced by properly attributed BR-compliant certificates. Comment 31 of the UAE and DM Root inclusion Bug is where the two misissued certs are documented: see https://bugzilla.mozilla.org/show_bug.cgi?id=1427262

• The second was the serialNumber entropy issue raised by Corey Bonnell. This amounted to an interpretation of the BRs that was known to CAs active at the time of Ballot 164, whereas DM interpreted the BRs as they are written. DM was not privy to the discussions around Ballot 164 and the subsequent community discussions. DM analyzed the BRs post Ballot 164 and concluded, based on the definition in Section 7.1, that our platform was compliant with the BRs. This continued to be our position until Ryan Sleevi posted historical data regarding the interpretations provided to other CAs after the adoption of Ballot 164. At this point DM took the decision to align with the general community interpretation of Section 7.1, which up to that point was not known to us (the sketch after this list shows one way to generate serial numbers under that interpretation). It was not a bug originally; our platform was compliant at the time it was introduced. The update to the BRs appears to have lost some specificity in its re-wording, which existing CAs knew about at the time, but any new CAs coming after the change may not reach the same conclusion without the benefit of the background information. All public trust TLS certs issued by DM have now been replaced and the platform updated, as documented in https://bugzilla.mozilla.org/show_bug.cgi?id=1531800
NOTE: The Roots and Issuing CAs have not yet been re-issued at this time, awaiting availability of WebTrust audit staff. There is also an open question on the MDSP group about the necessity of re-issuing the Roots.
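
For concreteness, a minimal sketch of the two checks referenced above follows, using the Python "cryptography" package (the package choice and helper names are illustrative assumptions and appear nowhere in DM's submission; this illustrates the requirements themselves, not how DM's platform implements them). A certificate parsed with x509.load_pem_x509_certificate() can be passed to the first helper; the second shows one conservative way to satisfy the Section 7.1 serial number requirement:

    import os

    from cryptography import x509
    from cryptography.x509.oid import NameOID


    def cn_repeated_in_san(cert):
        # The issue behind the two revoked certs: any commonName value in the
        # subject must also appear in the subjectAltName extension.
        cns = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
        try:
            san = cert.extensions.get_extension_for_class(
                x509.SubjectAlternativeName).value
        except x509.ExtensionNotFound:
            # A missing SAN is its own BR problem, but is out of scope here.
            return not cns
        names = set(san.get_values_for_type(x509.DNSName))
        names |= {str(ip) for ip in san.get_values_for_type(x509.IPAddress)}
        return all(cn in names for cn in cns)


    def new_serial_number(num_bytes=16):
        # BR 7.1: positive, non-sequential, containing at least 64 bits of
        # CSPRNG output. Drawing more than 8 random bytes (while staying
        # under the 20-octet DER limit) sidesteps the "exactly 64 bits"
        # ambiguity discussed in this thread. Note that no single observed
        # serial can prove or disprove compliance; only the generation
        # procedure can.
        assert 9 <= num_bytes <= 19
        while True:
            candidate = int.from_bytes(os.urandom(num_bytes), "big")
            if candidate > 0:
                return candidate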

It’s evident from the above that DM has expeditiously addressed any issues raised, and Mozilla has already said that there is no direct evidence of misissuance by DarkMatter. We have adhered to the rules and satisfied all technical criteria to be included. And we operate entirely transparently, providing all our public trust TLS certificates to CT logs so that anyone can see all the certificates issued over time; in this way, DarkMatter goes beyond the required standards.

2. Risk of Misuse

I also find it extremely troubling that speculation by members on this list has far exceeded the bounds of reality.

Claim: DM certificates could be issued to malicious actors who could then impersonate legitimate websites.

Response:
• DM does not issue certificates that can be used for MITM as stated in our published policies which we are audited against.

• There is no test that we’re aware of that allows us to ascertain someone’s malicious intent in the future, and if we find any misuse of a certificate in this regard we revoke it immediately. We are willing to engage in a dialogue with the industry on how to minimize the probability of malicious actors and willingly subscribe to the industry consensus on this.

Referenced claim: outlandish claims of DM being able to decrypt all its customers’ data if granted Root acceptance in Mozilla.

Response:
• As everybody here knows, this is entirely ludicrous, as commercial CAs never hold customers’ private keys and are therefore incapable of doing this.

I’m disappointed that, when I started this journey for inclusion of our Roots, the process was clearly defined in terms of compliance and controls, but it now appears to have devolved into the sort of wild speculation we are now experiencing.


Regards,

--

Scott Rea

Alex Gaynor

unread,
Mar 5, 2019, 9:17:01 AM3/5/19
to Scott Rea, mozilla-dev-security-policy
You're right, there is no test. That's why some of us believe we should
look at proxies such as honesty, since root membership is ultimately about
trust. DM has made claims that I am unable to understand as anything other
than lying.


> Referenced claim: outlandish claims of DM being able to decrypt all its
> customers’ data if granted Root acceptance in Mozilla.
>
> Response:
> • As everybody here knows, this is entirely ludicrous as commercial
> CAs never hold customer’s private keys and therefore are incapable of doing
> this.
>
>
As you are well aware, there is a neighboring claim that _is_ accurate:
that a malicious root CA would be able to issue for any domain,
and thus issue certificates to enable MITM. While it is misleading to say
that DM would be able to decrypt all customer data, it's completely true
that DM would be able to MITM _any_ TLS traffic -- customer or not!


> I’m disappointed that when I started this journey for inclusion of our
> Roots, that the process was clearly defined in terms of compliance and
> controls, but the process now appears to have devolved into dealing with
> the sort of wild speculation we are now experiencing.
>
>
Do you believe there is _any_ outside activity a CA could engage in, while
still maintaining clean audits, that should disqualify them from membership in
the Mozilla Root Program?


>
> Regards,
>
> --
>
> Scott Rea
>
> Scott Rea | Senior Vice President - Trust Services
> Tel: +971 2 417 1417 | Mob: +971 52 847 5093
> Scot...@darkmatter.ae
>

Alex

Benjamin Gabriel

unread,
Mar 5, 2019, 10:07:51 AM3/5/19
to Benjamin Gabriel, Scott Rea, mozilla-dev-security-policy
Message body (1 of 2)

Mozilla CA Certificate Policy Module Owner

Dear Wayne,

I am writing to provide an official response to the public discussion that you have initiated on mozilla.dev.security.policy, in accordance with Article 7.1 of the Mozilla Root Store Policy, on the inclusion of DarkMatter certificates in the Mozilla Root Certificate Store.

While we welcome the public discussion as a vital component in the maintenance of trust and transparency in Mozilla’s Root Store, we wish to bring to your attention, and to the attention of other esteemed CABForum members, DarkMatter’s reasonable apprehension of bias and conflict of interest in how the Mozilla organization has framed and conducted the discussion at hand. Notwithstanding the stated goal of transparency in the public discussion, recent public comments by Mozilla employees (including your opening statement in the discussion) indicate a hidden organizational animus that is fatal to the idea of “due process” and “fundamental fairness” being accorded to any CA applicant to the Mozilla Root Store.

As you are fully aware, DarkMatter has spent considerable effort over the past three (3) years to establish its commercial CA and Trust related business. A key milestone has been the successful completion of two (2) WebTrust public audits verifying that DarkMatter’s CA business is operating in accordance with the standards stipulated within the Mozilla Root Store Policy and the latest version of the CA/Browser Forum (“CABForum”) Requirements for the Issuance and Management of Publicly-Trusted Certificates. We have publicly disclosed our Certificate Policy and Certification Practice Statements showing how we comply with the above noted requirements.

A key pillar of the Mozilla Manifesto is the “commitment to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts” and a rejection of the use of the power of the internet to “intentionally manipulate fact and reality”.[1] Notwithstanding the call for a public discussion, we note that other senior members of your organization have already pre-judged in public, DarkMatter’s ability to be “trusted” on the basis of less than reasoned arguments and verifiable facts.

Marshall Erwin, director of trust and security for Mozilla, said the Reuters Jan. 30 report had raised concerns inside the company that DarkMatter might use Mozilla’s certification authority for “offensive cybersecurity purposes rather than the intended purpose of creating a more secure, trusted web.”

“We don’t currently have technical evidence of misuse (by DarkMatter) but the reporting is strong evidence that misuse is likely to occur in the future if it hasn’t already,” said Selena Deckelmann, a senior director of engineering for Mozilla.”

Every CA, Root CA, national PKI operator, and governmental regulatory body (in every country of the world) should be as alarmed as we are at the dystopian vision articulated by the Mozilla employees for those sovereign nations deemed not worthy of operating their own national certificates. The above comments indicate an approach that is contrary to the stated commitment of the Mozilla foundation to an “Internet that includes all the peoples of the earth – where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience”. It should be disturbing to the entire CABForum community that Mozilla is contemplating exercising its discretionary power in a capricious manner – against a company headquartered in the United Arab Emirates – simply on the basis of non-existent “evidence” of a future unknown “misuse”.

There simply cannot be “trust” in the discretionary power of a root store operator (whether it is Mozilla or Google), if its decisions are based on something less than “verifiable facts”.

In light of the above comments, we ask you, as the Mozilla CA Certificate Policy Module Owner, to further reconsider how you have framed the public discussion on DarkMatter’s inclusion request - with the following statement:

“The rationale for distrust is that multiple sources [1][4][5] have provided credible evidence that spying activities, including the use of sophisticated targeted surveillance tools, are a key component of DarkMatter’s business, and such an organization cannot and should not be trusted by Mozilla. In the past Mozilla has taken action against CAs found to have issued MitM certificates [6][7]. We are not aware of direct evidence of misused certificates in this case. However, the evidence does strongly suggest that misuse is likely to occur, if it has not already.”

There is no doubt in our mind that Mozilla’s inclusion of the references to CAs found to have issued “MitM Certificates” in the opening statement about the “rationale for distrust” of DarkMatter is extremely prejudicial in that it deliberately distorts the discussion and misinforms the public about DarkMatter’s inclusion request. Furthermore, it calls into question the unexpressed motives behind the concerted efforts by certain competitors to derail DarkMatter’s two-year process to have our Roots included in the Mozilla and Google root stores. For the record, we unequivocally state that DarkMatter has never been involved in the issuance of any “MitM Certificates”, and will never do so.

[CONTINUED IN MESSAGE BODY 2]


Benjamin Gabriel | General Counsel & SVP Legal
Tel: +971 2 417 1417 | Mob: +971 55 260 7410
Benjamin...@darkmatter.ae

Benjamin Gabriel

unread,
Mar 5, 2019, 10:11:52 AM3/5/19
to Scott Rea, Benjamin Gabriel, mozilla-dev-security-policy
Message Body (2 of 2)
[... continued ..]

Dear Wayne

Furthermore, it is unfortunate that Mozilla have chosen to reference categorically misleading articles (which continue to be recycled on slow-news days, on an annual basis, since 2016) to support the allegation of “credible evidence”, without sharing the verifiable facts upon which Mozilla have come to this conclusion. While we do not wish to prejudice our ongoing efforts to vigorously address defamatory statements through the appropriate legal channels, in the spirit of transparency we will touch on each of the referenced articles below:

•The Reuters and 2016 Intercept articles have been cited as “credible evidence”. They discuss allegations, events, and people that pre-date DarkMatter’s existence, and where DarkMatter is referenced it is by way of anecdotal references to false, defamatory, and unsubstantiated statements by parties who are either anonymous or peddling a hidden agenda of their own with respect to the United Arab Emirates.

•Our purpose and vision are very clear and publicly communicated: DarkMatter exists to enable businesses and governments to become smart, safe and cyber-resilient. We simply cannot comment on the allegations in the Reuters and Intercept reports, as they concern activities that we do not engage in, nor can we comment on matters about which we have no knowledge, such as the practices of government entities, individuals and other companies mentioned in these articles who are not associated with us.

•Mozilla have further cited a categorically false and blatantly defamatory posting by one Simone Margaritelli as a “credible reference.”[2] Again we remind you of the commitment by the Mozilla foundation for decision making using “verifiable facts”.

•Since we are alleged to have interviewed said individual and provided a job offer, it almost beggars belief that, to date, no one has provided any evidence of such communications or of participation in interviews by DarkMatter with such individual (whether abroad or in the United Arab Emirates). Such evidence does not exist – because (1) DarkMatter has never extended an offer in any capacity to such individual; (2) the persons mentioned as having granted an interview to such individual have never been employed by DarkMatter; and (3) DarkMatter does not have an office in the claimed interview location. These are clear examples of categorical falsehoods – and are not “verifiable facts” upon which Mozilla can support its pre-judgment of DarkMatter.

We are of the view that a fair-minded and objective observer would reasonably conclude that the public pre-judgment by Mozilla employees, and the inclusion of the above-noted allegations, especially the innuendo of the “MitM Certificates”, is fatal to the idea of “due process” and “fundamental fairness” being accorded to any CA applicant to the Mozilla Root Store.

In conclusion, we wish to reiterate our continued commitment to a transparent and auditable trust business. We will continue to operate our CA business in strict adherence and compliance with both the letter and spirit of relevant national laws without exception. We look forward to meeting with you, and with other CABForum members, at the CABF F2F in Cupertino, and to answering any further questions that you may have with regard to this matter.

Yours sincerely,

Benjamin Gabriel
General Counsel
DarkMatter Group

lmel...@gmail.com

unread,
Mar 5, 2019, 10:45:14 AM3/5/19
to mozilla-dev-s...@lists.mozilla.org
I am a non-technical person by far and have read most of this thread. What I am wondering is why there is no global public CA authority that is independent of individual nations but elected by them, along the lines of NATO?

Jonathan Rudenberg

unread,
Mar 5, 2019, 11:12:27 AM3/5/19
to dev-secur...@lists.mozilla.org
Hi Scott,

On Tue, Mar 5, 2019, at 09:02, Scott Rea via dev-security-policy wrote:
>
> • DM has resolved all technical and policy issues raised in the UAE and
> DM Roots submission process on Mozilla list: see
> https://bugzilla.mozilla.org/show_bug.cgi?id=1427262
>
> • Since the inception of the DM CA we have had two technical compliance
> issues during its three years of operation, both were addressed
> immediately and resolved.

Your count is incorrect. This certificate was misissued and appears to be a third incident that is not mentioned in your summary: https://crt.sh/?id=271084003&opt=zlint
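
(An illustrative aside, not part of the original message.) The sort of check being discussed here is straightforward to reproduce locally. Below is a minimal sketch, assuming that crt.sh's "?d=<id>" endpoint serves the raw certificate in PEM form, using the Python cryptography package. It applies two of the BR checks raised in this thread: whether the CN is mirrored in the SAN, and how many bits the serial number occupies.

import urllib.request

from cryptography import x509
from cryptography.x509.oid import NameOID

CRT_SH_ID = 271084003  # the certificate referenced above

# Assumption: crt.sh serves the raw PEM certificate from its "?d=<id>" endpoint.
pem = urllib.request.urlopen(f"https://crt.sh/?d={CRT_SH_ID}").read()
cert = x509.load_pem_x509_certificate(pem)

# BR 7.1.4.2.2: if a commonName is present, it must repeat a value from the SAN.
cns = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
try:
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
    san_names = set(san.get_values_for_type(x509.DNSName))
except x509.ExtensionNotFound:
    san_names = set()
for cn in cns:
    print(f"CN {cn!r} present in SAN: {cn in san_names}")

# BR 7.1: serial numbers must contain at least 64 bits of CSPRNG output.
# bit_length() cannot prove entropy, but it shows how many bits the value uses.
print(f"serial number occupies {cert.serial_number.bit_length()} bits")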

> • The first was that DM misissued 2 TLS certificates that were not in
> compliance with the BRs as reported by Rob Stradling - specifically the
> FQDN listed in the CN was not also included in the SAN. The 2 offensive
> certs were already flagged by DM and were held and revoked and were
> only in existence for approximately 18hrs and at NO TIME were they LIVE
> on the internet protecting any services. They were promptly replaced by
> properly attributed BR-compliant certificates. Comment 31 of the UAE
> and DM Root inclusion Bug is where the two misissued certs were
> documented see https://bugzilla.mozilla.org/show_bug.cgi?id=1427262

Since we are summarizing, it's worth noting again that no incident report was provided; one was requested in this comment: https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c32

Jonathan

Matthew Hardeman

unread,
Mar 5, 2019, 12:11:14 PM3/5/19
to Alex Gaynor, Scott Rea, mozilla-dev-security-policy
On Tue, Mar 5, 2019 at 8:16 AM Alex Gaynor via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> You're right, there is no test. That's why some of us believe we should
> look at proxies: such as honesty, considering root membership is ultimately
> about trust. DM has made claims that I am unable to understand in any way
> besides lying.
>
>
Unless the lies are material and relate to their CA operations, I don't
think they're relevant. One has to approach these stories with skepticism.
Bloomberg is regarded as reputable, but look at the SuperMicro case. If
there are provable commissions of dishonest behavior material to the
operations of the CA, I would think these would have been offered up by now.


> As you are well aware, there is a neighboring claim that _is_ accurate.
> Which is that a malicious root CA would be able to issue for any domain,
> and thus issue certificates to enable MITM. While it is misleading to say
> that DM would be able to decrypt all customer data, it's completely true
> that DM would be able to MITM _any_ TLS traffic -- customer or not!
>
>
And yet many tiny CAs exist, and if we look at the economics of CAs today,
some of them must be struggling. If this were their [DarkMatter's] intent,
rather than establishing a long term service, wouldn't they just buy up one
of those and delay the disclosure? If we assume that the presumptive
client demanding nefarious interception is the national government of the
UAE, it's clear that there's plenty of cash to do just that. With that
kind of money, you don't really even need to buy up a tiny CA. You could
likely just purchase the very integrity of the operators of one.


> Do you believe there is _any_ outside activity a CA could engage in, while
> still maintaing clean audits, that should disqualify them for membership in
> the Mozilla Root Program?
>

Personally, I think the value of the audits is rather limited, but it does
catch some things and remains a good safety net. Certificate Transparency has
done a great deal to improve this space and is, going forward, an even more
valuable check on corruption.

Objections to DarkMatter on the sole basis of the actions of a sibling
business with common owners is dangerous turf to get into, if we care about
historic precedent. Not only for corporate MITM but for straight-up
malware as well. Until quite recently the operation presently called
Sectigo was called Comodo and for a not brief period was owned by Francisco
Partners, an organization which also owns/owned the NSO Group.
Additionally, and before Symantec would ultimately be untrusted for
entirely unrelated reasons, Symantec owned BlueCoat.

This means there are two recent precedents for which this category of
issues has not resulted in delegation of trust and one proposal that the
same category of behaviors should. I am not suggesting that a position
against DarkMatter on this basis is an indicator of xenophobia or bias
against a particular national affiliation, but I do wonder how one would
defend against such an accusation.

Matthew Hardeman

unread,
Mar 5, 2019, 12:16:05 PM3/5/19
to Alex Gaynor, Scott Rea, mozilla-dev-security-policy
On Tue, Mar 5, 2019 at 11:10 AM Matthew Hardeman <mhar...@gmail.com>
wrote:

>
> This means there are two recent precedents for which this category of
> issues has not resulted in delegation of trust and one proposal that the
> same category of behaviors should. I am not suggesting that a position
> against DarkMatter on this basis is an indicator of xenophobia or bias
> against a particular national affiliation, but I do wonder how one would
> defend against such an accusation.
>

Whoops. What I meant to say is "has not resulted in revocation of the
delegation of trust".

Ryan Sleevi

unread,
Mar 5, 2019, 1:18:39 PM3/5/19
to Matthew Hardeman, Alex Gaynor, mozilla-dev-security-policy
On Tue, Mar 5, 2019 at 12:11 PM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Objections to DarkMatter on the sole basis of the actions of a sibling
> business with common owners is dangerous turf to get into, if we care about
> historic precedent. Not only for corporate MITM but for straight-up
> malware as well. Until quite recently the operation presently called
> Sectigo was called Comodo and for a not brief period was owned by Francisco
> Partners, an organization which also owns/owned the NSO Group.
> Additionally, and before Symantec would ultimately be untrusted for
> entirely unrelated reasons, Symantec owned BlueCoat.
>
> This means there are two recent precedents for which this category of
> issues has not resulted in delegation of trust and one proposal that the
> same category of behaviors should. I am not suggesting that a position
> against DarkMatter on this basis is an indicator of xenophobia or bias
> against a particular national affiliation, but I do wonder how one would
> defend against such an accusation.
>

I believe you may have misunderstood the details of these incidents and
their relationship to what's currently under discussion.

In the Sectigo + NSO Group case, these were entities that shared common
investment ownership, but otherwise operated as distinct business entities.
In the Symantec + BlueCoat case, these were integrated organizations - and the
concern was raised about ensuring that the entity BlueCoat did not have
access to the key material operated by the Symantec entity. In this case,
the Symantec entity asserted that the keys, operations, and audits were
under its scope - BlueCoat was prevented from having access or control. [1]

Both of those cases acknowledged a potential of conflicting interests, and
worked to distinguish how those conflicting interests would not conflict
with the community needs or goals.

By comparison, the discussion around DarkMatter has been more similar to
the discussion of Symantec rather than Sectigo, except DarkMatter has
issued carefully worded statements that may, to some, appear to be denials,
while to others, suggest rather large interpretative loopholes. This,
combined with the interpretative issues that have been shown throughout the
inclusion process - for which the serial numbers are merely the most recent
incident, but by no means the first, raises concerns that there may be
interpretative differences in the nature of the statements provided or the
proposed guarantees. This seems like a reasonable basis of concern. Recall
when TrustWave provided a similar creative interpretation regarding a MITM
certificate it issued for purposes of "local" traffic inspection [2][3],
attempting to claim it was not a BR violation. Or recall that Symantec made
similar claims that the 30,000+ certificates that it could not demonstrate
adhered to the BRs were somehow, nevertheless, not "misissued" [4] - as if
the point of concern was the semantic statement of misissuance, rather than
the systemic failure of the controls and the resulting lack of assurance.

In this regard, there is at least precedent that such interpretative
differences do not bode well.

Going back to past policy discussions provided [5], there seemed clear
consensus that there are business operations that are fundamentally
incompatible with being a trusted CA, without ascribing value judgements to
those operations or practices. The question is not one of bias or
xenophobia against a particular country or national affiliation - it's
whether or not the interests and operations of the business are
incompatible with a trusted CA. To date, we have reports of certain
business activities that are unquestionably incompatible with being a
trusted CA - and the only question is whether or not there is accuracy in
those reports of activities. We have a carefully worded statement that can
both indicate those activities happen (under a different semantic
interpretation - such as we saw with "misissuance" by Symantec or
"Enterprise management" by Trustwave or "entropy" by DarkMatter) or could
be seen as denials.

The process to date is indeed to evaluate whether or not there is a
conflict of interest that would disqualify an entity from being a CA, and
there's ample past precedent about that being a relevant factor for
discussion.

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/akOzSAMLf_k/Y1D4-RXoAwAJ
[2]
https://groups.google.com/d/msg/mozilla.dev.security.policy/ehwhvERfjLk/XyHxrYkxdnsJ
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=724929
[4]
https://community.digicert.com/en/blogs.entry.html/2017/03/24/symantec-backs-its-ca.html
[5]
https://groups.google.com/d/msg/mozilla.dev.security.policy/PFDMOUL_Xkc/HviuS-rx_gcJ

Matthew Hardeman

unread,
Mar 5, 2019, 1:59:03 PM3/5/19
to Ryan Sleevi, Alex Gaynor, mozilla-dev-security-policy
I do acknowledge the difference here, and I appreciate your bringing this
particular concern to my attention. As always, your depth of knowledge and
experience in the evolution of this area is astounding.

I suppose my initial response to the concern as presented is that it would
seem to be a fairly trivial (just paperwork, really) matter for DarkMatter
(or indeed any other applicant) to separate the CA into a fully separate
legal entity sharing common ownership with any other business they
may currently have going on. I put forth the question as to whether or not
the assurances you reference and the legal structuring you note are an
actual, effective risk mitigation.

I see two elements in this which might be said to be the real underlying
risk mitigation:

1. The legal structure and common ownership is truly the safety
mechanism. I find this...tenuous. I'm not sure any piece of paper ever
really kept a bad actor from acting bad. This seems very much like "Meet
the new boss [who's wholly owned by the old boss], same as the old boss."
In essence, if the matter on which the trust hangs is slight nuances in
first-party assertions, then it is so thin, and so consequence-free to
violate, that I regard it as not really material.

2. Maybe the real risk mitigation is self-interested asset appreciation /
asset protection. What I mean by this is that quite simply the ownership
of a hypothetical CA and a hypothetical "bad business" -- however we define
it but construed such that the "bad business" has an apparent conflict in
that they'd like to abuse their owner's CA's trust -- will act to defend
their business interest (in this case the value of each of the business
segments) by preventing one of their business segments from destroying the
continued value of the other segment. (We can agree, I believe, that a CA
that no one trusts has essentially no value.)

It's pretty clear that I put more faith in a business' "greedy"
self-interest than I do in legal entity paperwork games. Which, I believe,
raises an intriguing concept. What if the true test of a CA's
trustworthiness is, in fact, a mutually understandable case for building
and preserving the value of the CA as a standalone asset? In other words,
maybe we can only trust a CA whose value proposition to the ownership we
can reasonably understand from the ownership's perspective, limiting that
value to what can be derived from fully legitimate use of the CA,
considered both on a standalone basis and within the overall scope of the
larger business, and constrained to only the legitimate synergies that may
arise.


Jakob Bohm

unread,
Mar 5, 2019, 2:03:17 PM3/5/19
to mozilla-dev-s...@lists.mozilla.org
On 05/03/2019 16:11, Benjamin Gabriel wrote:
> Message Body (2 of 2)
> [... continued ..]
>
> Dear Wayne
>
> ...
>
> Yours sincerely,
>
> Benjamin Gabriel
> General Counsel
> DarkMatter Group
>
>
>

As an outside member of this community (not employed by Mozilla or any
public CA), I would like to state the following (which is not official
by my company affiliation and cannot possibly be official on behalf of
Mozilla):

1. First of all, thank you for finally directly posting Darkmatter's
response to the public allegations. Many people seemingly
inexperienced in security and policy have posted rumors and opinions
to the discussion, while others have mentioned that Darkmatter had
made some kind of response outside this public discussion thread.

2. It is the nature of every government CA or government sponsored CA
in the world that the current structure of the Mozilla program can
be easily abused in the manner speculated, and that any such abuse
would not be admitted, but rather hidden with the full skill and
efficiency of whatever spy agency orders such abuse. One of the
most notorious such cases occurred when a private company running a
CA for the Dutch Government had a massive security failure, allowing
someone to obtain MitM certificates for use against certain Iranian
people.
I have previously proposed a technical countermeasure to limit this
risk, but it didn't seem popular.

3. The chosen name of your CA "Dark matter" unfortunately is
associated in most English language contexts with either an obscure
astronomical phenomenon or as a general term for any sinister and
evil topic. This unfortunate choice of name may have helped spread
and enhance the public rumor that you are an organization similar
to the US NSA or its contractors. After all, "Dark matter is evil"
is a headline more likely to sell newspapers than "UAE company with a
very boring name helped UAE government attack someone". However I
understand that as a long established company, it is probably too late
to change your name.

4. The United States itself has been operating a government CA (The
federal bridge CA) for a long time, and Mozilla doesn't trust it.
In fact when it was discovered that Symantec had used one of their
Mozilla trusted CAs to sign the US federal bridge CA, this was one
of the many problems that led to Mozilla distrusting Symantec,
even though they were the oldest and biggest CA in the root
program.

5. While Darkmatter LLC itself may have been unaware of the discussions
that led to the wording of the 64-bit serial entropy requirements,
it remains an open question how QuoVadis was not aware of that discussion and
did not independently discover that you were issuing with only 63
bits under their authority.

6. Fairly recently, a private Chinese CA (WoSign) posted many partially
untrue statements in their defense. The fact that their posts were
not 100% true, and sometimes very untrue, has led to a situation
where some on this list routinely compare any misstatements by a
criticized CA to the consistent dishonesty that led to the permanent
distrust of WoSign and its subsidiary StartCom. This means that you
should be extra careful not to misstate details like the ones caught
by Jonathan Rudenberg in his response at 16:12 UTC today.

7. Your very public statement ended with a standard text claiming that
the message should be treated as confidential. This is a common
cause of ridicule and the reason that my own postings in this forum
use a completely different such text than my private e-mail
communication. As a lawyer you should be able to draft such a text
much better than my own feeble attempt.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Ryan Sleevi

unread,
Mar 5, 2019, 4:16:26 PM3/5/19
to Matthew Hardeman, Ryan Sleevi, Alex Gaynor, mozilla-dev-security-policy
On Tue, Mar 5, 2019 at 1:58 PM Matthew Hardeman <mhar...@gmail.com> wrote:

> I suppose my initial response to the concern as presented is that it would
> seem to be a fairly trivial (just paperwork, really) matter for DarkMatter
> (or indeed any other applicant) to separate the CA into a fully separate
> legal entity with common ownership interest with any other business they
> may currently have going on. I put forth the question as to whether or not
> the assurances you reference and the legal structuring you note are an
> actual, effective risk mitigation.
>

Oh, I don't think the corporate structure is really at all an effective
mitigation, certainly not in isolation. That's part of why I tried to
highlight the organization's response itself, rather than fixating on the
structure.

To briefly revisit what I highlighted in [1]: the process of root
inclusion has, from its inception, involved elements of subjectivity that
are inextricably linked with the objective process. This includes
the subjectivity of audits themselves, which the various auditing
professional standards try to minimize. While I admit my lack of deep
familiarity with the ISAE approach to professionally regulating and
addressing that subjectivity, or that of the Netherlands, where this
particular auditor is based, versus that of, say, the AICPA, the general
intent of such professional standards is to minimize that subjectivity and
avoid bringing the profession into disrepute. However, beyond that audit
subjectivity, root inclusion or removal requests have long considered
factors one might argue are subjective due to the lack of prescriptive
guidance - such as how a CA files incident reports and how "timely" and
"thorough" they are perceived in their responsiveness.

With this context and framing, it is hopefully clear why how the
organization responds, whether through an inclusion request or an incident
report, is an essential part of how one evaluates CAs for continued trust.
It may be that restructuring organizationally, as a response, can
ameliorate concerns or participate in addressing them, but I don't think
the organizational separation alone guarantees it. To take it to an
extreme, even for entirely distinct organizational entities whose
relationship is only through business dealings, if such a relationship
creates risk or the appearance of risk, it may be grounds to prevent or
remove trust.
Indeed, I think your #2 approaches how some may be viewing this. There has
certainly been discussion in the context of government CAs about what their
'incentives' are perceived to be - and how government CAs, without the
financial incentive structure that might discourage misbehaviour, are
generally seen as 'less' trustworthy precisely because they have less to
lose and, at an extreme, the threat of force (as some theories of
governance go, whether it be through judicial or extrajudicial means) to
enact their will. That is, of course, a whole discussion in itself - but it
is not at all dissimilar to parallel conversations that happen in the space
of unwanted software (UwS), spam, and phishing - which have a host of
economic concerns just as much as technical.

From that longer message in [1], I omitted a much longer discussion about
the goals of a root program, as I fear the discussion would become too
mired in confusion until this particular issue has reached a conclusion. I
will, however, highlight Frank's metapolicy discussion in [2]. At the
time, there were many existing and disparate PKIs, which provided
everything from 'simple' domain authentication to more complex forms of
vetting or restricted to particular communities. In some ways, the policy
(as captured in #10 of the metapolicy) was about evaluating and comparing
these existing PKIs and whether they helped improve the security of Mozilla
users (again, #10). You can see that this view is reflected in the
contemporary work at the time - such as the PKI Assessment Guidelines [3],
which heavily influenced the audit regime and principles, or RFC 3647 [4],
as well as the discussion about the policy itself [5]. These views
basically said that there is a "Mozilla PKI", and there are existing PKIs
of a variety of types and assurances, and the goal of the policy is
effectively determining whether to enter a form of business relationship
with these PKIs and assert that their operations are functionally
equivalent to the expectations of the Mozilla community. Everything that
derived from that - audits, public review, the policy itself - was about
evaluating whether or not to enter in a business relationship with these
third-parties, with Mozilla attempting to represent as best possible its
users.

While I have much more to say on that for a later time, I draw attention to
those particular bits from my earlier message, because as with any
relationship that affects the security of users and products - whether it
be new Web Platform features, evaluating reports of Firefox security issues
(in code or extensions), or in this case, with a CA - Mozilla is behaving
as the user's agent (hence "User Agent"). The process of determining
whether or not to trust a CA is, in part, attempting to evaluate whether
the CA in question has goals, incentives, and structures that are aligned
with the "typical" Mozilla user's needs ([2] #6 and #7), such that they can
be entrusted by Mozilla, and its users, to promote and encourage the
adoption of TLS ([1], #10).

You can see similarity in how Mozilla evaluates plugins and prevents them
from working in Firefox due to security concerns [6], or prohibit
third-party software that might harm users [7][8][9]. Some users may
legitimately want this software or functionality, but it may be removed or
blocked if it might be abusive to the typical user [10]. Similarly, Mozilla
has taken steps to remove entire classes of functionality [11] in the name
of improving user security.

I mention this to highlight how the discussion that happens here is
intentionally similar ([2] #19) to how Mozilla treats other aspects of
behaving as User's Agent, and how every CA that it adds is functionally
being treated as an extension of Mozilla and similarly expected to behave
as agents of users' (in this case, all Firefox users'). This evaluation
necessarily looks at a wide variety of things, some of them 'soft' and
'subjective', such as financial incentives and motivations, and aren't
easily compiled onto purely mechanical checklists - whether they be
extension policies, third-party software injecting into Firefox, plugins,
or CAs. As much as possible, transparency is desired - but, much like
extensions blocked for abusiveness, absolute transparency or absolute
objectivity is not always possible nor in the best interests of typical
users. Discussions such as this try to capture the evaluation and thought
processes that inform the module owners and peers in making a decision, and
to balance as wide as possible the feedback and input about how these
decisions further both Mozilla's mission and the needs of its 'typical'
users.

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/rNWEMEkUAQAJ

[2] http://hecker.org/mozilla/ca-certificate-metapolicy
[3]
https://www.americanbar.org/content/dam/aba/events/science_technology/2013/pki_guidelines.pdf
[4] https://tools.ietf.org/html/rfc3647
[5] http://hecker.org/mozilla/cert-policy-draft-10
[6] https://support.mozilla.org/en-US/kb/flash-blocklists
[7]
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/AMO/Policy/Reviews
[8] https://wiki.mozilla.org/Security/Safe_Browsing
[9]
https://support.mozilla.org/en-US/products/firefox/fix-problems/problems-add-ons-plugins-or-unwanted-software
[10]
https://www.bleepingcomputer.com/news/software/firefox-gets-privacy-boost-by-disabling-proximity-and-ambient-light-sensor-apis/
[11]
https://developer.mozilla.org/en-US/docs/Archive/Add-ons/Working_with_multiprocess_Firefox

Selena Deckelmann

unread,
Mar 5, 2019, 5:58:19 PM3/5/19
to mozilla-dev-s...@lists.mozilla.org
Hi!

Just wanted to briefly comment in response to Benjamin Gabriel's statement.

On Tuesday, March 5, 2019 at 7:07:51 AM UTC-8, Benjamin Gabriel wrote:

> Marshal Erwin, director of trust and security for Mozilla, said the Reuters Jan. 30 report had raised concerns inside the company that DarkMatter might use Mozilla’s certification authority for “offensive cybersecurity purposes rather than the intended purpose of creating a more secure, trusted web.”
>
> “We don’t currently have technical evidence of misuse (by DarkMatter) but the reporting is strong evidence that misuse is likely to occur in the future if it hasn’t already,” said Selena Deckelmann, a senior director of engineering for Mozilla.”
>

I think what you've quoted are accurate statements. That is, recent articles raised questions that I, and others, felt were important to bring to this public forum to discuss.

For that purpose, in the interest of a full public and transparent discussion of this trust decision, I appreciate DarkMatter engaging in this forum.

Wayne recently posted about our reasons for maintaining our own CA root program [1] and quoted the Mozilla Manifesto which states that "Individuals' security and privacy on the internet are fundamental and must not be treated as optional." He also stated the benefits of our process, where "we give individuals a voice in these trust decisions."

Thank you also to all the thoughtful contributors to this discussion, in particular this detailed analysis from Ryan Sleevi [2].

We make good on our commitments in the Manifesto when we bring these challenging discussions into the open.

-selena

[1] https://blog.mozilla.org/security/2019/02/14/why-does-mozilla-maintain-our-own-root-certificate-store/
[2] https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/rNWEMEkUAQAJ

andrewtipt...@gmail.com

unread,
Mar 5, 2019, 9:32:03 PM3/5/19
to mozilla-dev-s...@lists.mozilla.org
On Friday, February 22, 2019 at 2:21:24 PM UTC-7, Wayne Thayer wrote:
> ...

find another way. do NOT ALLOW DARK MATTER. Seems pretty straightforward to me, as per EFF. What would be the rationale?

westm...@gmail.com

unread,
Mar 6, 2019, 12:35:07 AM3/6/19
to mozilla-dev-s...@lists.mozilla.org
It seems to me that the acceptance of this root could cause great damage to Mozilla in the future and spark heated discussions in the Linux community. Is Mozilla ready to do all this and lose the support of a large number of users in the future? In my opinion, these are the main issues.

Benjamin Gabriel

unread,
Mar 6, 2019, 9:24:46 AM3/6/19
to Selena Deckelmann, mozilla-dev-s...@lists.mozilla.org, Benjamin Gabriel
Dear Selena,

On Wednesday, 6 March 2019 02:58:19 UTC+4, Selena Deckelmann wrote:
>
> I think what you've quoted are accurate statements. That is, recent articles raised questions that I, and others, felt were important to bring to this public forum to discuss.
>

While we welcome and are fully aligned with a public and transparent discussion, we continue to call for Mozilla representatives to exercise their discretionary powers in accordance with the principles of due process and fundamental fairness. We are in agreement that Mozilla is making good on its commitment when it brings these challenging discussions, and the articles of concern, to this public forum for an independent and unbiased discussion. However, with due respect, we believe that it is extremely prejudicial and biased when Mozilla representatives provide follow-up interviews - to the same misleading article - in order to simply state that this originally disputed “reporting is strong evidence”. It is very simple to see why DarkMatter has reasonable grounds for an apprehension of hidden bias in the Mozilla fiduciaries.

> Wayne recently posted about our reasons for maintaining our own CA root program [1] and quoted the Mozilla Manifesto which states that "Individuals' security and privacy on the internet are fundamental and must not be treated as optional."

We agree with the Mozilla Manifesto unequivocally. Mozilla should note that a key reason why DarkMatter decided to launch a commercial CA business is that the citizens, residents and visitors to the United Arab Emirates currently do not have access to local providers who can provide them with the protections taken for granted in other parts of the world. We are fully committed to the fundamental rights of the individual to security and privacy, and work diligently to advance those through all of our commercial efforts, services and products. While we are a young company, our commitment to the security and privacy of the individual is a “verifiable fact” that should also be introduced into this public discussion. To secure and protect individuals who use mobile devices for communications, we have successfully launched the KATIM® phone, a purpose-built mobile device based on four security pillars: hardened and tamper-resistant hardware, a hardened OS with a hardware-based crypto root of trust, the KATIM™ secure communications suite, and back-end infrastructure that together form a unique ultra-secure system. [1]

Contrary to the misleading narratives and articles being peddled by parties with a hidden agenda, we are fully committed to a secure and safer internet for all individuals everywhere. You will note that this has already been formally communicated in a letter to Mozilla by our CEO, and further shared in this public discussion. A good example of this commitment is the work our security researchers do, each and every day, to identify and disclose malicious applications that attack the security and privacy of individuals everywhere. In May 2018, we identified and informed Google of a malicious application available on the Google Play store.[2] In late 2018, we further made a responsible disclosure to Apple of a significant attack that “bypasses all native macOS security measures”, and presented the full findings at the Hack In The Box conference in Singapore. [3] As you can see, our commitment to the digital security of all individuals, whether in the United Arab Emirates or anywhere else in the world, is fully evident in our work and services to date.

We are also extremely proud of all our colleagues in DarkMatter who continually affirm their commitment to security and privacy through the work they conduct on a daily basis. Our CA business unit, headed by Scott Rea, has worked diligently to meet every technical requirement for a CA, in accordance with the CABForum Baseline Requirements and EV Guidelines. This Mozilla inclusion public discussion has also allowed us to showcase our timely and expedient response when issues are identified. A good example is how we responded in a timely manner to the concerns raised by certain list members with regard to the entropy non-compliance of our serial numbers on the EJBCA platform. As a result, other CAs are now alerted to the same issue affecting them – a case in point being Google, which has subsequently declared its own entropy non-compliance and is now in the process of revoking and replacing certificates with 63-bit-entropy serial numbers globally.[4]
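
(An illustrative aside, not part of the original message.) A minimal sketch of the 63-bit behaviour described above, assuming (as reported in this thread rather than verified here) that the issuing software draws an 8-octet serial from a CSPRNG and then forces the most significant bit to zero so the value encodes as a positive, fixed-length DER INTEGER:

import secrets

def fixed_length_serial() -> int:
    """Assumed behaviour: draw 64 random bits, then clear the top bit so the
    serial always encodes as a positive, exactly 8-octet DER INTEGER.
    Only 63 of the 64 bits remain random."""
    return secrets.randbits(64) & ~(1 << 63)

def compliant_serial() -> int:
    """One common remedy: draw more than 64 bits (here 72), so at least
    64 bits of CSPRNG output remain regardless of the leading bit."""
    return secrets.randbits(72)

samples = [fixed_length_serial() for _ in range(10_000)]
print(all(s.bit_length() <= 63 for s in samples))  # True: bit 64 is never set
print(max(s.bit_length() for s in samples))        # 63, with overwhelming probability

Across many independently generated 64-bit serials, the top bit should be set roughly half the time, which is why a collection of serials that never exceeds 63 bits stands out statistically.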

Again, we look forward to meeting the Mozilla representatives, and other CABForum members, at the CABForum’s F2F, and following up on any further clarifications Mozilla may need for a more public and transparent discussion.

Benjamin Gabriel
General Counsel, DarkMatter Group.

[1] https://www.darkmatter.ae/KATIM/
[2] https://www.darkmatter.ae/blogs/darkmatter-identifies-app-stealing-personal-information/
[3] https://www.forbes.com/sites/thomasbrewster/2018/08/30/apple-mac-loophole-breached-in-middle-east-hacks/#7791c17a6fd6
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1532842

na...@symbolic.software

unread,
Mar 6, 2019, 10:37:09 AM3/6/19
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, March 5, 2019 at 7:18:39 PM UTC+1, Ryan Sleevi wrote:
> On Tue, Mar 5, 2019 at 12:11 PM Matthew Hardeman via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
> By comparison, the discussion around DarkMatter has been more similar to
> the discussion of Symantec rather than Sectigo, except DarkMatter has
> issued carefully worded statements that may, to some, appear to be denials,
> while to others, suggest rather large interpretative loopholes. This,
> combined with the interpretative issues that have been shown throughout the
> inclusion process - for which the serial numbers are merely the most recent
> incident, but by no means the first, raises concerns that there may be
> interpretative differences in the nature of the statements provided or the
> proposed guarantees. This seems like a reasonable basis of concern. Recall
> when TrustWave provided a similar creative interpretation regarding a MITM
> certificate it issued for purposes of "local" traffic inspection [2][3],
> attempting to claim it was not a BR violation. Or recall that Symantec made
> similar claims that the 30,000+ certificates that it could not demonstrate
> adhered to the BRs were somehow, nevertheless, not "misissued" [4] - as if
> the point of concern was the semantic statement of misissuance, rather than
> the systemic failure of the controls and the resulting lack of assurance.
>
> In this regard, there is at least precedent that such interpretative
> differences do not bode well.

Perhaps it would be helpful for Mozilla to posit a set of unambiguous statements that it would require DarkMatter to categorically and fully deny. The goal of doing so would be to quell any potential "interpretative loopholes" within DarkMatter's denials. That could be a way of moving parts of this discussion solidly forward.

Ryan Sleevi

unread,
Mar 6, 2019, 10:51:21 AM3/6/19
to Benjamin Gabriel, Selena Deckelmann, mozilla-dev-s...@lists.mozilla.org
(Writing in a personal capacity)

Benjamin,

I've focused only on the substantive new information added to this
discussion relevant to trust. I hope the past messages have highlighted why
much of the message may fundamentally misunderstand the purpose of a
root store and the root store inclusion process; there are two salient bits
from this message that are worth responding to.

On Wed, Mar 6, 2019 at 9:24 AM Benjamin Gabriel via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Mozilla should note that a key reason why DarkMatter decided to launch a
> commercial CA business is because the citizens, residents and visitors to
> the United Arab Emirates currently do not have access to local providers
> who can provide them with the protections taken for granted in other parts
> of the world.


As it relates to TLS certificates, which is the purpose of discussion for
this root inclusion, could you highlight or explain why "citizens,
residents, and visitors" do not have access to TLS certificates, or how
those protections offered by DarkMatter are somehow different?

I highlight this, because given the inherently global nature of the
Internet, there is no technical need to work with local CAs, and, with a
well-run root store, all CAs provide an equivalent level of protection and
security, which rests in the domain authorization. This is, of course,
comparable to the domain name system of gTLDs (rather than ccTLDs), which
is inherently global in nature. Additional services that a CA may provide
do not necessarily factor into the decision-making process, as the
question to answer is whether the risk of including a given CA, which will
have proverbial keys to the Internet, is outweighed by the new and unique
benefit it brings to the promotion and adoption of TLS for the "typical"
user.


> This Mozilla inclusion public discussion has also allowed us to showcase
> our timely and expedient response when issues are identified. A good
> example is our lead, in how we responded in a timely manner to the concerns
> raised, by certain list members, with regard to entropy non-compliance of
> our serial numbers on the EJBCA platform.
>

This does not seem to align with the objective facts.
1) To the best of my knowledge, DarkMatter has still not provided a timely
incident report, as others have pointed out [1], over 9 months after it was
requested [2].
2) DarkMatter's response to the serial number issue has demonstrated that
DarkMatter did not do the expected due diligence to investigate and
understand the issue. As acknowledged by your colleague [3], DarkMatter
only began to respond to this incident once it was carefully explained to
them by members of this community why the interpretation was wrong.
However, the expectation of CAs, as has been consistently applied to all
CAs, is that the CA is itself capable of performing such an analysis,
and its ability or inability to do so is a significant factor in
trustworthiness.
3) This incident fits within a larger pattern [4] of issues [5] in which
DarkMatter has demonstrated a lack of understanding of the core
expectations of Certificate Authorities.

As demonstrated in the past [6], CAs that have demonstrated patterns of
misunderstanding or disputing core requirements may not be in the best
interests of the security of Mozilla users. Given that organizations will
have the capability to intercept all Mozilla users' network traffic, by
virtue of directly misissuing, or may jeopardize the security of Mozilla
users, through improperly operating their CA in a way that can allow for
compromise or misuse [7], it seems that there is ample precedent.

Respectfully, the choice to include any new CA is a question about whether
or not this new CA will provide new value and benefit to typical Mozilla
users, commensurate with the risk. You have highlighted that you believe
such articles are misleading, but there are a number of unanswered
questions from past replies that seek a better understanding. However, such a
reply also seems to overlook the fact that the discussion does not begin
with an assumption of "We should trust", and then seek to find reasons not
to, but rather, begins with an evaluation of "Should we trust", and seeks
to evaluate the benefits of that new addition versus the risks that each
new CA poses.

In line with the above, in which a pattern of risk has been identified
through how DarkMatter has handled the incidents and independent of the
nature and severity of the incidents themselves, and the broader thread has
highlighted a risk of the loss of faith and trust in the Mozilla
Foundation's commitment to its users' security, perhaps you can spend time
highlighting the benefit that trust in DarkMatter will provide the typical
Mozilla user, so that it can be evaluated as to whether that benefit is
greater than the risk.

I think it is worth noting that, rather than treating this as somehow a
response to a given geographic area, CAs which provide limited value to
small communities, but put at risk the broader community, have resulted in
discussions [8][9] that ultimately resulted in the CAs removal [10].

Understanding the material differences in these situations may go much
further in productive discussion, and hopefully demonstrate the consistency
of the long-held principles being applied to the present discussion.

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/vvVSHKakAgAJ
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c32
[3]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/HiNC3UeeAgAJ
[4] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c10
[5] https://bugzilla.mozilla.org/show_bug.cgi?id=1427262#c39
[6]
https://groups.google.com/d/msg/mozilla.dev.security.policy/lqZersN26VA/NVLf6YPWAAAJ
[7] https://wiki.mozilla.org/CA:Symantec_Issues
[8]
https://groups.google.com/d/msg/mozilla.dev.security.policy/NNV3zvX43vE/rae9kNkWAgAJ
[9]
https://groups.google.com/d/msg/mozilla.dev.security.policy/eOuTmwQKMSk/pyXvWHhmBQAJ
[10] https://bugzilla.mozilla.org/show_bug.cgi?id=1493822

Kathleen Wilson

unread,
Mar 6, 2019, 2:15:20 PM3/6/19
to mozilla-dev-s...@lists.mozilla.org
All,

Thank you to those of you that have been providing thoughtful and
constructive input into this discussion. I have been carefully reading
and contemplating all of the messages posted in the
mozilla.dev.security.policy forum.

As the owner of Mozilla’s CA Certificates Module[1] and in an effort to
respond to Matthew’s concerns about transparency[2], I would like to
share my current thoughts about DarkMatter’s intermediate certificates
and root inclusion request. I will make a decision after this discussion
has run its full course.

I appreciate that representatives of DarkMatter are participating in
this discussion, and reiterate that I have not yet come to a decision. I
would also like to remind everyone that we have not yet started the
public discussion phase of DarkMatter’s root inclusion request. This
discussion is separate from Mozilla’s root inclusion process, but will
determine if the process will continue for DarkMatter’s root inclusion
request. If this discussion concludes that DarkMatter’s intermediate
certificates should be added to OneCRL, then the root inclusion request
will be closed. However, if this discussion concludes that DarkMatter’s
intermediate certificates should not be added to OneCRL, then
DarkMatter’s root inclusion request will continue to follow the normal
process.

== Regarding DarkMatter’s current intermediate certificates ==

The current DarkMatter intermediate certificates are not constrained or
technically controlled by the parent CA, as was confirmed by a
representative of DigiCert[3]. This means that currently DarkMatter has
all of the certificate issuance capability of a root certificate that is
directly included in Mozilla’s root store. This is why we are having
this discussion to determine if DarkMatter’s current intermediate
certificates should be added to OneCRL.
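
(An illustrative aside, not part of the original message.) Under Mozilla policy, a "technically constrained" TLS-capable intermediate roughly means one whose Extended Key Usage is limited to TLS and which carries a Name Constraints extension restricting the domains it may issue for. A minimal sketch of checking a downloaded intermediate for both, using the Python cryptography package; the file name is a placeholder:

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

# Placeholder path: a subordinate CA certificate exported in PEM form.
with open("intermediate.pem", "rb") as f:
    ica = x509.load_pem_x509_certificate(f.read())

try:
    nc = ica.extensions.get_extension_for_class(x509.NameConstraints).value
    print("Name constraints, permitted subtrees:", nc.permitted_subtrees)
except x509.ExtensionNotFound:
    print("No name constraints: may issue for any domain")

try:
    eku = ica.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    tls_only = set(eku) <= {ExtendedKeyUsageOID.SERVER_AUTH,
                            ExtendedKeyUsageOID.CLIENT_AUTH}
    print("EKU present and limited to TLS:", tls_only)
except x509.ExtensionNotFound:
    print("No EKU: key usage purposes are unrestricted")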

In my opinion, there are other options for DarkMatter. For example, a CA
that is currently included in Mozilla's program, such as DigiCert, could
issue DarkMatter new intermediate certificates that are owned and
controlled by DigiCert and for which DigiCert performs additional domain
validation before issuance of end-entity certs in that CA hierarchy. I
think that an option like this would provide sufficient oversight of
DarkMatter’s certificate issuance, if we decide to add DarkMatter’s
current intermediate certificates to OneCRL.

== Regarding DarkMatter’s root inclusion request ==

Since I began working on Mozilla’s CA Program in 2008 I have rarely seen
this much interest, and so many opinions, from the media and general public on
root inclusion requests, even though all of our process is performed in
the open[4] and includes a public discussion phase[5]. In my opinion,
we should pay attention to the messages we're receiving, and subject
this CA to additional scrutiny.

As others have already pointed out[6] DarkMatter’s root inclusion
request is reminiscent of CNNIC’s root inclusion request in 2009 [7] and
their request to include an additional root in 2012 [8]. As Ryan
reminded us[9] in his excellent analysis, the decisions about the
inclusion of the CNNIC root certificates was based on “a rigid
application of policy”. In one of my posts[10] about CNNIC’s root
inclusion requests I stated:
“There was a lot of discussion about government, politics, legal
jurisdiction, what-if scenarios, and people’s opinions about the Chinese
government. While I sympathize with people’s feelings about this,
Mozilla’s root program is based on policy and evidence. While CNNIC has
provided all of the required information to demonstrate their compliance
with Mozilla’s CA Certificate Policy, no usable evidence has been
provided to show non-compliance with Mozilla’s CA Certificate Policy.”

As we all know, in 2015 Mozilla revoked trust in CNNIC certificates[11]
after discussion[12] in this forum regarding the discovery that an
intermediate CA under the CNNIC root was used to mis-issue TLS
certificates for some domains, and subsequently used for MiTM. In that
case, rigid application of the policy left our users at risk. This was
an important learning experience for us.

Root inclusion requests rarely receive this much attention. Another one
that we have been reminded of is TeliaSonera’s root inclusion
discussion[13], in which I stated: “Typically this would have been
considered a very standard request, but this discussion turned into a
political sounding board. Approval of this root-renewal request means
that the CA complies with Mozilla’s CA Certificate Policy and provides
annual audit statements attesting to their compliance. It in no way
reflects my opinion, or that of Mozilla, on the actions of the owner of
the CA in regards to their non-CA related businesses and practices.”

Unlike CNNIC, TeliaSonera still has root certificates in Mozilla’s root
store. Similar to many CAs in our program, TeliaSonera has had some
compliance problems[14], but (to my knowledge) no evidence has been
provided of TeliaSonera knowingly issuing certificates without the
knowledge of the entities whose information is referenced in the
certificates, or knowingly issuing certificates that appear to be
intended for fraudulent use. TeliaSonera’s reported compliance problems
have not yet been deemed to be egregious enough to warrant removal of
their root certificates. Therefore, it is not as simple as saying that
this DarkMatter root inclusion request seems similar to the CNNIC
situation, so we should not approve DarkMatter’s root inclusion request.

However, I believe that the CNNIC experience is a valuable lesson that
should be taken into account when making a decision on DarkMatter.
During CNNIC’s root inclusion process, the community expressed grave
concerns about the company based on credible reports that they had been
involved in interception and surveillance of web traffic, including
providing malware products to others such as their government. Even with
these credible news reports, the community was unable to obtain
technical evidence of intentional certificate mis-issuance, so I
approved their root inclusion request. In essence this meant ignoring
the evidence that had been provided because I deemed that it was not
directly applicable to the policy requirements for being a CA in our
program. However, it wasn’t until much later that there was sufficient
evidence to remove the CNNIC’s root certificate. Therefore, we should
not ignore credible news reports regarding DarkMatter.

Matthew correctly stated[15] that he “can not recall use of subjective
discretion to deny admission to the program.” As demonstrated in both
the CNNIC and TeliaSonera requests I have always tried to be as
objective as possible in regards to root inclusion requests. However, as
Ryan pointed out[16] “the program is, and has always been, inherently
subjective and precisely designed to support discretionary decisions.”
And Wayne said[17]: “A stronger argument along these lines is that we
have plenty of CAs, so there is no good reason to take a risk on one
that we lack confidence in.” I do not believe that we should take a
certain action just because it is what we have always done. And we
should use all of the information that is available to us in analyzing
the risk that comes with including new root certificates, even if that
means the decision is more subjective than previous decisions. The
ultimate purpose of our transparency and our standards is to bolster
trust in our CA program. Ignoring information that doesn’t fall within
strict criteria does not serve that purpose.

Mozilla’s root store policy[18] says: “We will determine which CA
certificates are included in Mozilla's root program based on the risks
of such inclusion to typical users of our products.” To me this means
that if the risks of including a root certificate appear to outweigh the
benefits, then we should deny the root inclusion. There are credible
reports from multiple sources[19] providing reason to not trust the
DarkMatter organization to issue TLS certificates without constraints. I
think that the decision about DarkMatter should consider if the risk of
including DarkMatter’s root certificates outweighs the potential benefit
to consumers of Mozilla’s root store.

As always, I continue to appreciate your thoughtful and constructive input.

Thanks,
Kathleen

[1] https://wiki.mozilla.org/Modules/All#CA_Certificates
[2]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/hi3WDHlYAgAJ
[3]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/I8CYOScMBgAJ
[4] https://wiki.mozilla.org/CA/Dashboard
[5] https://wiki.mozilla.org/CA/Application_Verification#Public_Discussion
[6]
https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups-shouldnt-be-trusted-your-browser-or-anywhere-else
[7] https://bugzilla.mozilla.org/show_bug.cgi?id=476766
[8]
https://groups.google.com/d/msg/mozilla.dev.security.policy/QEwyx6TQ5TM/qzX_WsKwvIgJ
[9]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/rNWEMEkUAQAJ
[10]
https://groups.google.com/d/msg/mozilla.dev.security.policy/QEwyx6TQ5TM/c3GXKsASCX4J
[11]
https://blog.mozilla.org/security/2015/04/02/distrusting-new-cnnic-certificates/

[12]
https://groups.google.com/d/msg/mozilla.dev.security.policy/czwlDNbwHXM/Fj-LUvhVQYEJ
[13]
https://groups.google.com/d/msg/mozilla.dev.security.policy/mirZzYH5_pI/5LJ-X-XfIdwJ
[14] https://wiki.mozilla.org/CA/Incident_Dashboard
[15]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/ew5ZnJtVAgAJ

[16]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/IfewIb0hAgAJ
[17]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/joyWkf5TAgAJ
[18]
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/

[19]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/YiybcXciBQAJ


















