
Mozilla Policy Requirements CA Incidents


Ryan Sleevi

Oct 7, 2019, 2:53:11 PM
to mozilla-dev-security-policy
In light of Wayne's many planned updates as part of version 2.7 of the
Mozilla Root Store Policy, and prompted by some folks looking at adding
linters, I recently went through and spot-checked some of the Mozilla
Policy-specific requirements to see how well CAs are doing at following
these.

I discovered five issues, below:

# Intermediates that do not comply with the EKU requirements

In September 2018 [1], Mozilla sent a CA Communication reminding CAs about
the changes in Policy 2.6.1. One specific change, called to attention in
ACTION 3, required the presence of EKUs in intermediates, and the
separation of e-mail and SSL/TLS usages into distinct intermediates. This requirement,
while new to Mozilla Policy, was not new to publicly trusted CAs, as it
matched an existing requirement from Microsoft's Root Program [2]. This
requirement was first introduced by Microsoft in July 2015, in Version 2.0
of their policy.

It's reasonable to expect that all CAs in both Microsoft's and Mozilla's
programs would have been conforming to Microsoft's stricter requirement,
which goes above and beyond the Baseline Requirements. However, Mozilla
still allowed existing intermediates to be grandfathered in, setting the
effective date for its policy at 2019-01-01. Mozilla also set forth certain
exclusions to account for cross-signing.
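
As purely an illustration (not any existing linter's implementation, and with
the cross-signing exclusions deliberately not modeled), a check for this
requirement can be sketched in a few lines with the pyca/cryptography library;
the function name and overall shape here are my own assumptions:

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def check_intermediate_eku(pem_bytes):
    # Flag intermediates that lack an EKU extension, or that combine
    # serverAuth and emailProtection in a single certificate.
    # (Recent versions of the library do not need a backend argument.)
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        return ["intermediate has no EKU extension"]
    usages = set(eku)
    problems = []
    if (ExtendedKeyUsageOID.SERVER_AUTH in usages
            and ExtendedKeyUsageOID.EMAIL_PROTECTION in usages):
        problems.append("serverAuth and emailProtection are not separated")
    return problems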

Despite that, four CAs have violated this requirement in 2019:
* Microsoft: https://bugzilla.mozilla.org/show_bug.cgi?id=1586847
* Actalis: https://bugzilla.mozilla.org/show_bug.cgi?id=1586787
* QuoVadis: https://bugzilla.mozilla.org/show_bug.cgi?id=1586792
* NetLock: https://bugzilla.mozilla.org/show_bug.cgi?id=1586795

# Authority Key Identifier issues

RFC 5280, Section 4.2.1.1 [3], defines the Authority Key Identifier
extension. Within RFC 5280, it states that (emphasis added)

The identification MAY be based on ***either*** the
key identifier (the subject key identifier in the issuer's
certificate) ***or*** the issuer name and serial number.

That is, it provides an either/or requirement for this field. Despite this
not being captured in the updated ASN.1 module defined in RFC 5912 [4],
Mozilla Root Store Policy has, since Version 1.0 [5], included a
requirement that CAs MUST NOT issue certificates that have (emphasis added)
"incorrect extensions (e.g., SSL certificates that exclude SSL usage,
or ***authority
key IDs that include both the key ID and the issuer's issuer name and
serial number)***;"
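
As an aside, and purely as my own sketch (not an official check from any
linter), the policy language above reduces to a simple test on the decoded
extension; with the pyca/cryptography library it might look like:

from cryptography import x509

def aki_has_both_forms(cert):
    # True when the Authority Key Identifier carries both the key identifier
    # and the issuer-name/serial pair, which the policy text above calls out
    # as an incorrect extension.
    try:
        aki = cert.extensions.get_extension_for_class(
            x509.AuthorityKeyIdentifier).value
    except x509.ExtensionNotFound:
        return False  # absence of an AKI is a separate question
    has_key_id = aki.key_identifier is not None
    has_issuer_serial = (aki.authority_cert_issuer is not None
                         or aki.authority_cert_serial_number is not None)
    return has_key_id and has_issuer_serial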

In examining issuance, I found that one CA, trusted by Mozilla, regularly
violates this requirement:

* Camerfirma: https://bugzilla.mozilla.org/show_bug.cgi?id=1586860

# Thoughts

While I've opened CA incident issues for all five incidents, I am concerned
that requirements that have been clearly communicated, and reiterated, are
violated like this. The one exception to this is Microsoft, which at the
time of issuance was not yet a participant in the Mozilla Root CA Program
directly, although admittedly, it's concerning that they might have
violated their own Root Program requirements.

I'm not sure how we can better prevent such situations, especially when
they were clearly communicated and affirmatively acknowledged by the CAs in
question. I'd be concerned with any suggestions that only rules placed in
the Baseline Requirements should be followed; that would quite literally be
placing the cart before the horse, since browsers lead the BRs.

I'd love to understand people's thoughts about how to handle such
situations, and more generally, what can be done to better prevent such
situations going forward.


[1]
https://wiki.mozilla.org/CA/Communications#September_2018_CA_Communication
[2] https://aka.ms/rootcert 4.A.10
[3] https://tools.ietf.org/html/rfc5280#section-4.2.1.1
[4] https://tools.ietf.org/html/rfc5912
[5] https://wiki.mozilla.org/CA:CertificatePolicyV1.0

Jeremy Rowley

Oct 7, 2019, 7:06:33 PM
to ry...@sleevi.com, mozilla-dev-security-policy
Interesting. I can't tell with the Netlock certificate, but the other three non-EKU intermediates look like replacements for intermediates that were issued before the policy date and then reissued after the compliance date. The industry has established that renewal and new issuance are identical (source?), but we know some CAs treat these as different instances. While that's not an excuse, I can see why a CA could have issues with a renewal compared to new issuance as changing the profile may break the underlying CA.

Note that revoking these CAs puts the CA back on issuing from the legacy ICA that was issued before the renewal. Depending on the reason for the reissue, that may be a less desirable outcome. I don't have a good answer on what to do in a circumstance like this (and I'm a bit biased probably since we have a relationship with Quovadis). However, there's probably something better than "trust" vs. "distrust" or "revoke" vs. "non-revoke", especially when it comes to an intermediate. I guess the question is what is the primary goal for Mozilla? Protect users? Enforce compliance? They are not mutually exclusive objectives of course, but the primary driver may influence how to treat issuing CA non-compliance vs. end-entity non-compliance.

Of the four, only Quovadis has responded to the incident with real information, and none of them have filed in the required format or given sufficient information. Is it too early to say what happens before there is more information about what went wrong? Key ceremonies are, unfortunately, very manual beasts. You can automate a lot of it with scripting tools, but the process of taking a key out, performing a ceremony, and putting things away is not automated due to the off-line root and FIPS 140-3 requirements.

BTW, I'm really liking how these issues are raised here in bulk by problem. This is a really nice format and gets the community involved in looking at what to do. I think it also helps identify common causes of problems.

Jeremy

Ryan Sleevi

Oct 7, 2019, 8:45:51 PM
to Jeremy Rowley, mozilla-dev-security-policy, ry...@sleevi.com
On Mon, Oct 7, 2019 at 7:06 PM Jeremy Rowley <jeremy...@digicert.com>
wrote:

> Interesting. I can't tell with the Netlock certificate, but the other
> three non-EKU intermediates look like replacements for intermediates that
> were issued before the policy date and then reissued after the compliance
> date. The industry has established that renewal and new issuance are
> identical (source?), but we know some CAs treat these as different
> instances.


Source: Literally every time a CA tries to use it as an excuse? :)

My question is how we move past “CAs provide excuses”, and at what point
the same excuses fall flat?

While that's not an excuse, I can see why a CA could have issues with a
> renewal compared to new issuance as changing the profile may break the
> underlying CA.


That was Quovadis’s explanation, although with no detail to support that it
would break something, simply that they don’t review the things they sign.
Yes, I’m frustrated that CAs continue to struggle with anything that is not
entirely supervised. What’s the point of trusting a CA then?

However, there's probably something better than "trust" vs. "distrust" or
> "revoke" v "non-revoke", especially when it comes to an intermediate. I
> guess the question is what is the primary goal for Mozilla? Protect users?
> Enforce compliance? They are not mutually exclusive objectives of course,
> but the primary drive may influence how to treat issuing CA non-compliance
> vs. end-entity compliance.


I think a minimum goal is to ensure the CAs they trust are competent and
take their job seriously, fully aware of the risk they pose. I am more
concerned about issues like this, which CAs like QuoVadis acknowledge they
would not cause.

The suggestion of a spectrum of responses fundamentally suggests root
stores should eat the risk caused by CAs' flagrant violations. I want to
understand why browsers should continue to be left holding the bag, and why
every effort at compliance seems to fall on how much the browsers push.

Of the four, only Quovadis has responded to the incident with real
> information, and none of them have filed the required format or given
> sufficient information. Is it too early to say what happens before there is
> more information about what went wrong? Key ceremonies are, unfortunately,
> very manual beasts. You can automate a lot of it with scripting tools, but
> the process of taking a key out, performing a ceremony, and putting things
> a way is not automated due to the off-line root and FIPS 140-3
> requirements.


Yes, I think it’s appropriate to defer discussing what should happen to
these specific CAs. However, I don’t think it’s too early to begin to try
and understand why it continues to be so easy to find massive amounts of
misissuance, and why policies that are clearly communicated and require
affirmative consent are something CAs are still messing up. It suggests
trying to improve things by strengthening requirements isn’t helping as
much as needed, and perhaps more consistent distrusting is a better
solution.

In any event, having CAs share the challenges is how we do better.
Understanding how the CAs not affected prevent these issues is equally
important. We NEED CAs to be better here, so what’s the missing part about
why it’s working for some and failing for others?

I know it seems extreme to suggest to start distrusting CAs over this, but
every single time, it seems there’s a CA communication, affirmative
consent, and then failure. The most recent failure to disclose CAs is
equally disappointing and frustrating, and it’s not clear we have CAs
adequately prepared to comply with 2.7, no matter how much we try.

Jeremy Rowley

Oct 7, 2019, 11:45:57 PM
to ry...@sleevi.com, mozilla-dev-security-policy
Speaking from a personal perspective -

This all makes sense, and, to be honest, the spectrum/grade idea isn't a good or robust one. Implementing something like that requires too many judgment calls about whether a CA belongs in box x vs. box y and what the difference is between those two boxes. I also get the frustration with certain issues, especially when they pop up among CAs – especially if the rule is well established.

I’ve been looking at the root causes of mis-issuance in detail (starting with DigiCert) and so far I’ve found they divide into a few buckets: 1) where the CA relied on a third party for something and probably shouldn’t have, 2) where there was an internal engineering issue, 3) a manual process went bad, 4) software the CA relied on had an issue, or 5) the CA simply couldn’t/didn’t act in time. From the incidents I’ve categorized so far (still working on all the incidents for all CAs), the biggest buckets seem to be engineering issues followed by manual process issues. For example, at DigiCert proper the engineering issues represent about 35% of the issues. (By DigiCert proper, I exclude the Sub CAs and Quovadis systems – this allows me to look exclusively at our internal operations compared to the operations of somewhat separate systems.) The next biggest issue is our failure to move fast enough (30%) followed by manual process problems (24%). DigiCert proper doesn’t use very much third party software in its CA so that tends to be our smallest bucket.

The division between these categories is interesting because some are less in control of the CA than others. For example, if PrimeKey has an issue, pretty much everyone has an issue since so many CAs use PrimeKey at some level (DigiCert via Quovadis). The division is also somewhat arbitrary and based solely on the filed incident reports. However, what I’m looking for is whether the issues result from human error, insufficient implementation timelines, engineering issues, or software issues. I’m not ready to make a conclusion industry-wide.

The trend I’ve noticed at DigiCert is the percent of issues related to DigiCert manual processes is decreasing while the percent of engineering blips is increasing. This is a good trend as it means we are moving away from manual processes and into better automation. What else is interesting is that the number of times we’ve had issues with moving too slow has dropped significantly over the last two years, which means we’ve seen substantial improvement in communication and handling of changes in industry standards. The number of issues increased, but I chalk that up to more transparency and scrutiny by the public (a good thing) rather than worse systems.

The net result is a nice report that we’re using internally (and will share externally) that shows where the biggest improvements have been made. We’re also hoping this data shows where we need to concentrate more. Right now, the data is showing more focus on engineering and unit tests to ensure all systems are updated when a guideline changes.

So why do I share this data now before it’s ready? Well, I think looking at this information can maybe help define possible solutions. Long and windy, but…

One resulting idea is that maybe you could require a report on improvements from each CA based on their issues? The annual audit could include a report similar to the above where the CA looks at the past year of their own mistakes and the other industry issues and evaluates how well they did compared to previous years. This report can also describe how the CA changed their system to comply with any new Mozilla or CAB Forum requirements. What automated process did they put in place to guarantee compliance? This part of the audit report can be used to reflect on the CA’s operations and make suggestions to the browsers on where the CA needs to improve and where it needs to automate. It can also be used to document one area of improvement the CA needs to focus on.

Although this doesn’t cure immediate mis-issuances, it does give better transparency into what CAs are doing to improve and exactly how they implemented the changes made to the Mozilla policy. A report like this also shifts the burden of dealing with issues to the community instead of the module owners and focuses the CA on fixing their systems and learning from mistakes. With the change to WebTrust audits, there’s an opportunity for more free-form reporting that can include this information. And this information has to be far more interesting than reading about yet another individual who forgot to check a box in CCADB.

This is still more reactive than I’d like and sometimes requires a whole year before a CA gives information about the changes made to systems to reflect changes in policy. The report does get people thinking proactively about what they need to do to improve, which may, by itself, be a force for improvement. This also allows the community to evaluate a CA’s issues over the past year and how they addressed what went wrong compared to previous years, and to see what the CA is doing that will make the next year even better.


Jeremy




Ryan Sleevi

Oct 8, 2019, 2:02:42 PM
to Jeremy Rowley, ry...@sleevi.com, mozilla-dev-security-policy
On the topic of root causes, there's also
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3425554 that was
recently published. I'm not sure if that was peer reviewed, but it does
provide an analysis of m.d.s.p and Bugzilla. I have some concerns about the
study methodology (for example, when incident reports became normalized is
relevant, as well as incident reporting where security researchers first
went to the CA), but I think it looks at root causes a bit holistically.

I recently shared on the CA/B Forum's mailing list another example of
"routine" violation:
https://cabforum.org/pipermail/servercert-wg/2019-October/001154.html

My concern is that, 7 years later, while I think that compliance has
marginally improved (largely due to things led by outside the CA ecosystem,
like CT and ZLint/Certlint), I think the answers/responses/explanations we
get are still falling into the same predictable buckets, and that concerns
me, because it's neither sustainable nor healthy for the ecosystem.


- We misinterpreted the requirements. It said X, but we thought it meant
Y (Often: even though there's nothing in the text to support Y, that's just
how we used to do business, and we're CAs so we know more than browsers
about what browsers expect from us)
- We weren't paying attention to the updates. We've now assigned people
to follow updates.
- We do X by saying our staff should do X. In this case, they forgot.
We've retrained our staff / replaced our staff / added more staff to
correct this.
- We had a bug. We did not detect the bug because we did not have tests
for this. We've added tests.
- We weren't sure if X was wrong, but since no one complained, we
assumed it was OK.
- Our auditor said it was OK
- Our vendor said it was OK

and so forth.

And then, in the responses, we generally see:

- These certificates are used in Very Important Systems, so even though
we said we'd comply, we cannot comply.
- We don't think X is actually bad. We think X should be OK, and it
should be Browsers that reject X if they don't like X (implicit: But they
should still trust our CA, even though we aren't doing what they want)
- Our vendor is not able to develop a fix in time, so we need more time.
- We agree that X is bad, and has always been prohibited, but we need
more time to actually implement a fix (because we did not plan/budget/staff
to actually handle issues of non-compliance)

and so forth.

It's tiring and exhausting because we're hearing the same stuff. The same
patterns that CAs were using when they'd issue MITM certs to companies:
"Oh, wait, you mean't DON'T issue MITM certs? We didn't realize THAT'S what
you meant" (recall, this was at least one CA's response when caught issuing
MITM certs).

I'm exasperated because we're seeing CAs do things like not audit sub-CAs,
but leaving all the risk to be accepted by browsers, because it's too
hard/complex to migrate. We're seeing things like CAs not following policy
requirements, but then correcting those issues is risky because now they've
issued a bunch of certs and it's painful to have to replace them all.

If we go back to that classic Dan Geer talk,
https://cseweb.ucsd.edu/~goguen/courses/275f00/geer.html , every time a CA
issues a certificate, they've now externalized the risk onto browsers/root
stores for that certificate lifetime. It's left to the ecosystem to detect
and clean up the mess, while the CA/subscriber gets the full benefits of
the issuance. It's a system of incentives that is completely misaligned,
and we've seen it now for the past decade: The CA benefits from the
(mis)issuance, and extracts value until it's detected, and then the cost of
cleanup is placed on the browser/Root Program that expects CAs to actually
conform. If the Browser doesn't enforce, or consistently enforce, then we
get back to the "Race to the bottom" that plagued the CA industry, as
"Requirements" become "Suggestions" or "Nice ideas". Yet if the Browser
does enforce, they suffer the blame from the Subscriber, who is unhappy
that the thing they bought no longer works.

In all of this time, it doesn't seem like we're making much progress on
systemic understanding and prevention. If that's an unfair statement, then
it means that some CAs are progressing, and some aren't, so how do we help
the ones that aren't? At what point do we go from education to removal of
trust? Where is the line when the same set of responses have been used so
much that it's no longer reasonable? When this ecosystem moves at a snail's
pace, due to CAs' challenges in updating systems and the long lifetime of
certificates, the feedback loop is large, and CAs can exploit that
asymmetry until they're detected. That may sound like I'm ascribing
intentional malice, when I'm mainly just talking about the perverse
incentives here that are hindering meaningful improvement.

While I appreciate your suggestion of more transparency, and I'm notably
all for it, this wouldn't help with, for example, QuoVadis' response to the
issue. To borrow from Donald Rumsfeld, the set of issues with any single CA
are, from the browser perspective, the "unknown unknowns". Such a report
would not tell us, for example, that QuoVadis viewed renewal and issuance
as separate and independent from requirements. Unless we had all of their
processes and procedures in front of us, to review the diff, we wouldn't
spot that there was an "issuance playbook" and a "renewal playbook". Of
course, there might not have even been a "renewal" playbook until that
matter came up, so if they created it new, we also wouldn't have detected
it.

In theory, the incident reports are meant to help the ecosystem improve.
But if we see egregiously bad incident reports, as I think we have, or
incident reports that are equivalent to stonewalling for answers by trying
to give the shortest, least possible information, and we move to take
sanction on those CAs, we only discourage future incident reporting.

To bring this back, now, to the original topic at hand: What should we be
doing when requirements are phased in, with years of notice, advanced
communication, and they're still violated? What should we be doing when
clear-cut requirements are violated?

I see a few options:
(a) Accept that what we're doing is not enough, and do something different.
If so, what would be different, compared to everything that's been tried?
That was the original gist of the first message.
(b) Accept that what we're doing is enough, and the CAs that are failing
are simply not up to the task expected of them, and removing them is the
only way to correct this. This was the gist of the second message.
(c) Accept that this system is inherently flawed, and the incentive
structures misaligned such that this is a natural expectation of any
complex system. If that's the case, perhaps we should more holistically
look to replace the system?

This is relevant with the Policy 2.7 update. With all of the effort to
provide added clarity and improved requirements, do we have reason to
believe that CAs will adopt and follow it? The past approach is to send a
CA communication and require affirmative consent. That clearly is not
working (for some CAs). Suggestions of doing it in the Forum are sometimes
raised, but that clearly (per the related message) is also failing. So, is
there something different to try? I like the suggestion of listing
everything that the CA is changing as part of their operation, although I
don't think it will prevent these issues (back to "unknown unknowns"). I
don't have much faith that the auditors will catch these issues, BR or
otherwise. So... what do we have to make sure Policy 2.7 goes off smoothly?


Paul Walsh

Oct 8, 2019, 2:44:35 PM
to ry...@sleevi.com, Jeremy Rowley, mozilla-dev-security-policy
I read Jeremy’s last response before posting my comment.

Dear Ryan,

It would help a great deal if you toned down your constant insults towards the entire CA world. Questioning whether you should trust any CA is a bridge too far.

Instead, why don’t you try to focus on specific issues with specific CAs, or specific issues with most CAs? I don’t think you have a specific issue with every CA in the world.

If specific CAs fail to do what you think is appropriate for browser vendors, perhaps you need to implement new audits, or improve existing ones? Propose solutions, implement checks and execute better reviews. Then iterate until everyone gets it right.

I could write a book on how Google is the least “trustworthy” browser vendor on the planet. I could write another book about how Google is constantly contradicting its own advice and best practices. One example is where Google tells us to focus on the part of the URL that matters most - the domain name. But over here we have AMP, where URLs go to die a slow painful death within Google’s closed system, adding no value to the world outside of advertising. The list is endless when it comes to the lack of respect for people’s privacy from *some* browser vendors. Not all browsers are evil. Not all CAs are evil.

So, please can you get off your high horse and stick to facts and propose solutions instead of constantly making personal insults and bringing up problems without implementing new processes to address same.

Can we just keep in mind that we’re all trying to do our job. No company is perfect. No process is perfect. No technology solution is perfect.

Peace!

- Paul

p.s. I don’t work for a CA and never have. And I believe there are many weaknesses that could and should be better addressed.




Ryan Sleevi

Oct 8, 2019, 3:10:21 PM
to Paul Walsh, Jeremy Rowley, mozilla-dev-security-policy
On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh <pa...@metacert.com> wrote:

> Dear Ryan,
>
> It would help a great deal, if you tone down your constant insults towards
> the entire CA world. Questioning whether you should trust any CA is a
> bridge too far.


> Instead, why don’t you try to focus on specific issues with specific CAs,
> or specific issues with most CAs. I don’t think you have a specific issue
> with every CA in the world.


> If specific CAs fail to do what you think is appropriate for browser
> vendors, perhaps you need to implement new, or improve existing audits?
> Propose solutions, implement checks and execute better reviews. Then
> iterate until everyone gets it right.
>

Paul,

I appreciate your response, even if I believe it's largely off-topic,
deeply confused, and personally insulting.

This thread is acknowledging there are systemic issues, that it's not with
specific CAs, and that the solutions being put forward aren't working, and
so we need better solutions. It's also about being willing to acknowledge that if
we can't find systemic fixes, it may be that we have a broken system, and
we should not be afraid of looking to improve or replace the system.

Perhaps you (incorrectly) read "CAs" to mean "Every CA in the world", when
it's just a plurality of "more than one CA". That's a bias on the reader's
part, and suggesting that every plurality be accompanied by a qualifier
("some", "most") is just tone policing rather than engaging on substance.

That said, it's entirely inappropriate to chastise me for highlighting
issues of non-compliance, and attempt to identify the systemic issue
underneath it. It's also entirely inappropriate to insist that I personally
solve the issue, especially when significant effort has been expended to
address these issues so far, efforts which continue to fail without much
explanation as to why they're failing. Suggesting that we should accept
regular failures and just deal with it, unfortunately, has no place in
reasonable or rational conversation about how to improve things. That's
because such a position is not interested in finding solutions, or
improving, but in accepting the status quo.

If you have suggestions on why these systemic issues are still happening,
despite years of effort to improve them, I welcome them. However, there's
no place for reasonable discussion if you don't believe we should have open
and frank conversations about issues, about the misaligned incentives, or
about how existing efforts to prevent these incidents by Browsers are
falling flat.

Paul Walsh

Oct 8, 2019, 3:21:53 PM
to ry...@sleevi.com, Jeremy Rowley, mozilla-dev-security-policy
Ryan,

You just proved me right by saying I’m confused because I hold an opinion about how you conduct yourself when collaborating with industry stakeholders. My observations are the same across the board. I don’t think I’m confused. But you’re welcome to disagree with me. And, it’s not off-topic. We should be respectful when communicating in forums like this. I think your communication is sometimes disrespectful.

You also tell people they are confused about bylaws and other documents when they’re in disagreement with you. It’s possible for someone to fully understand and appreciate specific guidelines and disagree with you at the same time.

I’ve contributed to many W3C specifications over the years - I co-founded two, including the Mobile Web Initiative. I was also Chair of BIMA.co.uk for three years. My point is this, when contributing to industry initiatives, I learned that there will always be instances where individuals need to be reminded to show respect to others when communicating differences of opinion - especially when there is a strong chance of culture differences. I don’t mind being reminded from time to time. Nobody is perfect.

You can take this feedback, or leave it. Your call.

- Paul





Ryan Sleevi

Oct 8, 2019, 3:44:43 PM
to Paul Walsh, Ryan Sleevi, Jeremy Rowley, mozilla-dev-security-policy
Paul,

If you'd like to continue this conversation, might I respectfully ask you
take it elsewhere from this thread? It does not seem you're interested in
finding solutions for the issues, and you've continued to shift your
message, so perhaps it might be better to continue that discussion
elsewhere?

Thanks.

Matthew Hardeman

Oct 8, 2019, 3:51:33 PM
to Ryan Sleevi, Paul Walsh, mozilla-dev-security-policy, Jeremy Rowley
On Tue, Oct 8, 2019 at 2:10 PM Ryan Sleevi via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh <pa...@metacert.com> wrote:
>
> so we need better solutions. It's also being willing to acknowledge that if
> we can't find systemic fixes, it may be that we have a broken system, and
> we should not be afraid of looking to improve or replace the system.
>

Communication styles aside, I believe there's merit to far more serious
community consideration of the notion that either the system overall or the
standard for expectations of the system's performance is literally
broken. There's probably a better forum for that discussion than this
thread, but I echo that I believe the notion has serious merit.

Paul Walsh

Oct 8, 2019, 3:56:01 PM
to Matthew Hardeman, Ryan Sleevi, mozilla-dev-security-policy, Jeremy Rowley

> On Oct 8, 2019, at 12:51 PM, Matthew Hardeman <mhar...@gmail.com> wrote:
>
>
> On Tue, Oct 8, 2019 at 2:10 PM Ryan Sleevi via dev-security-policy <dev-secur...@lists.mozilla.org <mailto:dev-secur...@lists.mozilla.org>> wrote:
> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh <pa...@metacert.com <mailto:pa...@metacert.com>> wrote:
>
> so we need better solutions. It's also being willing to acknowledge that if
> we can't find systemic fixes, it may be that we have a broken system, and
> we should not be afraid of looking to improve or replace the system.
>
> Communication styles aside, I believe there's merit to far more serious community consideration of the notion that either the system overall or the standard for expectations of the system's performance are literally broken. There's probably a better forum for that discussion than this thread, but I echo that I believe the notion has serious merit.

[PW] It looks like I said those words above, but I didn’t :)

Matthew Hardeman

Oct 8, 2019, 4:04:45 PM
to Paul Walsh, Ryan Sleevi, mozilla-dev-security-policy, Jeremy Rowley
My apologies. I messed up when trimming that down. I was quoting Ryan
Sleevi there.


Paul Walsh

Oct 8, 2019, 4:06:40 PM
to Ryan Sleevi, Jeremy Rowley, mozilla-dev-security-policy

> On Oct 8, 2019, at 12:44 PM, Ryan Sleevi <ry...@sleevi.com> wrote:
>
> Paul,

[snip]

> It does not seem you're interested in finding solutions for the issues,

[PW] You are mixing things up, Ryan. I am interested in finding solutions to issues. I specifically kept my message on point, which was your tone and approach to communication - this is equally important to the content you put forward. My point was made and you obviously didn’t receive it well - I’m ok with that. Most people don’t respond well to criticism.

I will only contribute proposed solutions for issues where I possess deep domain expertise - moderating and chairing standards and best practices is one area, hence my contribution.

> and you've continued to shift your message, so perhaps it might be better to continue that discussion elsewhere?

[PW] In my opinion, this is the right place. You don’t get to dictate where and when. The alternative would be to walk into a broom cupboard and scream at the wall.

I won’t comment on this matter any further as I think we’ve labored the subject and I don’t want to take up people’s time any further.

- Paul



Ryan Sleevi

Oct 8, 2019, 5:03:35 PM
to Paul Walsh, Ryan Sleevi, Jeremy Rowley, mozilla-dev-security-policy
To try and minimize some of the tone policing, ad hominem, arguments from
authority, and thread-jacking, especially on-list, let's circle back to the
subject of this thread, and hopefully you can offer constructive solutions
there.

Is my understanding correct that your concern is you don't believe it's
appropriate to discuss concerns about systemic patterns of misissuance, to
highlight specific CAs that have demonstrated misissuance despite every
reasonable effort to prevent it, and to suggest that it's reasonable to
consider solutions such as either distrusting CAs (If this is simply "a few
bad apples") or systemic changes (if this is "all CAs")? Before you veered
well off-topic into tone policing, it did seem that the gist of your
argument was that you don't think it's reasonable or appropriate to suggest
that removing trust in CAs might be an appropriate remediation to sustained
patterns of failure?

In the spirit of finding productive solutions, rather than hijacking
threads, perhaps you could offer suggestions on what you believe could or
should have been done to prevent the issues like we saw. As noted in the
original message, Mozilla sent a CA communication reminding CAs of the
upcoming change, and requiring they positively confirm that they would
abide by it. However, that still failed. This was not a new requirement
Mozilla was introducing, but one introduced by Microsoft some time ago.
Every one of the CAs responded that they understood the requirement and
would abide by it.

What, in your opinion, could or should have been done to prevent this?

If your view is that nothing can prevent it, then yes, we'll disagree, and
a position of accepting those flaws without attempting to prevent them is
likely to find no purchase here.
If your view is that something could have been done, but wasn't, then it'd
be useful to understand what was missing.

It's unclear if you had thoughts to share on the topic, but if you'd like
to suggest it's inappropriate to distrust CAs, or to question whether there
are systemic flaws in the CA ecosystem if such events are functionally
inevitable, then my hope is you'd have solutions you can offer, and ideas
that have not yet been considered. Those would be examples of productive
contributions.

Wayne Thayer

Oct 8, 2019, 5:20:51 PM
to Ryan Sleevi, Jeremy Rowley, mozilla-dev-security-policy
Ryan,

Thank you for pointing out these incidents, and for raising the meta-issue
of policy compliance. We saw similar issues with CP/CPS compliance to
changes in the 2.5 and 2.6 versions of policy, with little explanation
beyond "it's hard to update our CPS" and "oops". Historically, our approach
has been to strive to communicate policy updates to CAs with the assumption
that they will happily comply with all of the requirements they are aware
of. I don't think that's a bad thing to continue, but I agree it is not
working.

Having said that, I do recognize that translating "Intermediates must
contain EKUs" into "don't renew this particular certificate" across an
organization isn't as easy as it sounds. I'd be really interested in
hearing how CAs are successfully managing the task of adapting to new
requirements and if there is something we can do to encourage all CAs to
adopt best practices in this regard. Our reactive options short of outright
distrust are limited, so I think it would be worthwhile to focus on new
preventive measures.

Thanks,

Wayne

Jeremy Rowley

Oct 8, 2019, 6:42:15 PM
to Wayne Thayer, Ryan Sleevi, mozilla-dev-security-policy
Tackling Sub CA renewals/issuance from a compliance perspective is difficult because of the number of manual components involved. You have the key ceremony, the scripting, and all of the formal process involved. Because the root is stored in an offline state and only brought out for a very intensive procedure, there is a lot that can go wrong compared to end-entity certs, including bad profiles and bad coding. These events also happen rarely enough that many CAs might not have well-defined processes around them. A couple of things we’ve done to eliminate issues include:


1. Two-person review of the profile + a formal sign-off from the policy authority
2. A standard scripting tool for generating the profile to ensure only the subject info in the cert changes. This has some basic linting.
3. We issue a demo cert. This cert is exactly the same as the cert we want to issue, but it’s not publicly trusted and includes a different serial. We then review the demo cert to ensure profile accuracy. We should run this cert through a linter (added to my to-do list; a rough sketch of that step follows below).
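
As a rough sketch of that linting step (illustrative only; it assumes the
zlint command-line tool is on the PATH and prints a JSON object keyed by lint
name, so the exact output format should be verified against the zlint version
in use):

import json
import subprocess
import sys

def lint_demo_cert(pem_path):
    # Run zlint over the demo certificate and report anything that isn't a pass.
    out = subprocess.run(["zlint", pem_path], capture_output=True,
                         text=True, check=True)
    results = json.loads(out.stdout)
    findings = {name: r for name, r in results.items()
                if r.get("result") in ("warn", "error", "fatal")}
    for name, r in sorted(findings.items()):
        print(f"{r['result']:>5}  {name}")
    return len(findings)

if __name__ == "__main__":
    sys.exit(1 if lint_demo_cert(sys.argv[1]) else 0)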

We used to treat renewals as separate from new issuance. I think there’s still a sense that they “are” different, but that’s been changing. I’m definitely looking forward to hearing what other CAs do.

Jeremy



Erwann Abalea

Oct 8, 2019, 6:43:24 PM
to mozilla-dev-s...@lists.mozilla.org
Bonsoir,

On Monday, October 7, 2019 at 20:53:11 UTC+2, Ryan Sleevi wrote:
[...]
If this is to be read as an exclusive choice, then how do you interpret the third paragraph of clause 4.2:

Conforming CAs MUST support key identifiers (Sections 4.2.1.1 and
4.2.1.2), basic constraints (Section 4.2.1.9), key usage (Section
4.2.1.3), and certificate policies (Section 4.2.1.4) extensions.

Does that mean that CAs MUST exclusively choose between keyIdentifier or issuerName+serialNumber, while at the same time using keyIdentifier? Just get rid of the issuerName+serialNumber, then.

Now go down to Appendix A.2, which contains the ASN.1 module, and you'll find some comments in the definition (that's the way lazy ASN.1 writers try to express constraints):

AuthorityKeyIdentifier ::= SEQUENCE {
    keyIdentifier             [0] KeyIdentifier            OPTIONAL,
    authorityCertIssuer       [1] GeneralNames             OPTIONAL,
    authorityCertSerialNumber [2] CertificateSerialNumber  OPTIONAL }
    -- authorityCertIssuer and authorityCertSerialNumber MUST both
    -- be present or both be absent

Here, again, the constraint is on presence or absence of both issuer and serial, nothing on presence of both keyIdentifier and the (issuer,serial) tuple.

> Despite this
> not being captured in the updated ASN.1 module defined in RFC 5912 [4],
> Mozilla Root Store Policy has, since Version 1.0 [5], included a
> requirement that CAs MUST NOT issue certificates that have (emphasis added)
> "incorrect extensions (e.g., SSL certificates that exclude SSL usage,
> or ***authority
> key IDs that include both the key ID and the issuer's issuer name and
> serial number)***;"

Isn't it strange that while RFC5912 modified the AuthorityKeyIdentifier definition to add ASN.1 constraints on the presence or absence of both authorityCertIssuer/authorityCertSerialNumber elements, nothing has been added to extend the same constraint to forbid the presence of both keyIdentifier and issuer+serial? It would have been really easy if it was intended that way.

I'll let participants read X.509 clause 8.2.2.1/9.2.2.1/12.2.2.1 (depending on the edition you're reading) to discover that the ASN.1 definition is equal to RFC5912's one since 1997 (first edition of X.509v3), and find that both keyIdentifier and issuer+serial are explicitly permitted (given that all is consistent). That's 6 successive revisions since, and it hasn't changed.


Now, if strict compliance with RFC5280 is required, I'd like to understand

At a minimum, applications conforming to this profile MUST recognize
the following extensions: key usage (Section 4.2.1.3), certificate
policies (Section 4.2.1.4), subject alternative name (Section
4.2.1.6), basic constraints (Section 4.2.1.9), name constraints
(Section 4.2.1.10), policy constraints (Section 4.2.1.11), extended
key usage (Section 4.2.1.12), and inhibit anyPolicy (Section
4.2.1.14).

To my knowledge, unless this has changed in the past months, NSS doesn't properly handle CertificatePolicies, PolicyConstraints, and InhibitAnyPolicy.

And also with the following paragraph taken from RFC5280 clause 6:

This section describes an algorithm for validating certification
paths. Conforming implementations of this specification are not
required to implement this algorithm, but MUST provide functionality
equivalent to the external behavior resulting from this procedure.
Any algorithm may be used by a particular implementation so long as
it derives the correct result.

when a CA certificate contains an ExtendedKeyUsage extension. You know that the algorithm described in this section doesn't use this extension and thus doesn't limit a certificate chain based on this extension.


Cordialement,
Erwann.

Ryan Sleevi

Oct 8, 2019, 7:49:39 PM
to Jeremy Rowley, Wayne Thayer, Ryan Sleevi, mozilla-dev-security-policy
On Tue, Oct 8, 2019 at 6:42 PM Jeremy Rowley <jeremy...@digicert.com>
wrote:

> Tackling Sub CA renewals/issuance from a compliance perspective is
> difficult because of the number of manual components involved. You have the
> key ceremony, the scripting, and all of the formal process involved.
> Because the root is stored in an offline state and only brought out for a
> very intensive procedure, there is lots that can go wrong compared to
> end-entity certs, including bad profiles and bad coding. These events are
> also things that happen rarely enough that many CAs might not have well
> defined processes around. A couple things we’ve done to eliminate issues
> include:
>
>
>
> 1. 2 person review over the profile + a formal sign-off from the
> policy authority
> 2. A standard scripting tool for generating the profile to ensure only
> the subject info in the cert changes. This has basic some linting.
> 3. We issue a demo cert. This cert is exactly the same as the cert we
> want to issue but it’s not publicly trusted and includes a different
> serial. We then review the demo cert to ensure profile accuracy. We should
> run this cert through a linter (added to my to-do list).
>
>
>
> We used to treat renewals separate from new issuance. I think there’s
> still a sense that they “are” different, but that’s been changing. I’m
> definitely looking forward to hearing what other CAs do.
>

It's not clear: Are you suggesting that the configuration of sub-CA profiles
is more, less, or equally risky compared to end-entity certificates? It would
seem that, regardless, the need for review and oversight is the same, so
I'm not sure that #1 or #2 would be meaningfully different between the two
types of certificates?

That said, of the incidents, only two of those were potentially related to
the issuance of new versions of the intermediates (Actalis and QuoVadis).
The other two were new issuance.

So I don't think we can explain it as entirely around renewals. I
definitely appreciate the implicit point you're making: which is every
manual action of a CA, or more generally, every action that requires a
human be involved, is an opportunity for failure. It seems that we should
replace all the humans, then, to mitigate the failure? ;)

To go back to your transparency suggestion, would we have been better off if:
1) CAs were required to strictly disclose every single certificate profile
for everything "they sign"
2) Demonstrate compliance by updating their CP/CPS to the new profile, by
the deadline required. That is, requiring all CAs update their CP/CPS prior
to 2019-01-01.

Would this prevent issues? Maybe - only to the extent CAs view their CP/CPS as
authoritative, and strictly review what's on them. I worry that such a
solution would lead to the "We published it, you didn't tell us it was bad"
sort of situation (as we've seen with audit reports), which then further
goes down a rabbit-hole of requiring CP/CPS be machine readable, and then
tools to lint CP/CPS, etc. By the time we've added all of this complexity,
I think it's reasonable to ask if the problem is not the humans in the
loop, but the wrong humans (i.e. going back to distrusting the CA). I know
that's jumping to conclusions, but it's part of what taking an earnest look
at these issues are: how do we improve things, what are the costs, are
there cheaper solutions that provide the same assurances?
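
To make that concrete, here is a deliberately naive sketch of what a
machine-readable-profile check could look like; the profile shape (a plain
list of expected extension OIDs) and the function are invented for
illustration, since no such format actually exists today:

from cryptography import x509

def diff_against_declared_profile(cert, declared_extension_oids):
    # Compare the extensions actually present in an issued certificate against
    # the set of extension OIDs the CA's published profile declares.
    declared = set(declared_extension_oids)
    actual = {ext.oid.dotted_string for ext in cert.extensions}
    problems = []
    for oid in sorted(declared - actual):
        problems.append("declared extension %s missing from certificate" % oid)
    for oid in sorted(actual - declared):
        problems.append("certificate extension %s not in declared profile" % oid)
    return problems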

Ryan Sleevi

Oct 8, 2019, 8:11:01 PM
to Erwann Abalea, mozilla-dev-security-policy
(Sorry for the second e-mail, Erwann still having some Groups issues - this
will be the one that shows up on the list)

On Tue, Oct 8, 2019 at 6:43 PM Erwann Abalea via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> If this is to be read as an exclusive choice, then how do you interpret
> third paragraph of clause 4.2:
>
> Conforming CAs MUST support key identifiers (Sections 4.2.1.1 and
> 4.2.1.2), basic constraints (Section 4.2.1.9), key usage (Section
> 4.2.1.3), and certificate policies (Section 4.2.1.4) extensions.
>
> Does that mean that CAs MUST exclusively choose between keyIdentifier or
> issuerName+serialNumber, while at the same time use keyIdentifier? Just get
> rid of the issuerName+serialNumber, then.
>

English language plurality? That is talking about "subject key identifiers
and authority key identifiers" - not about keyIdentifiers. From that same
section you're quoting (4.2), the phrase is repeated for applications, but
with a slight twist:

In addition, applications conforming to this profile SHOULD recognize
the authority and subject key identifier (Sections 4.2.1.1 and
4.2.1.2) and policy mappings (Section 4.2.1.5) extensions.

It was just an editorial quirk by the RFC editor relating to there being
"two" items on the latter list, but "four" items on the former.


> > Despite this
> > not being captured in the updated ASN.1 module defined in RFC 5912 [4],
> > Mozilla Root Store Policy has, since Version 1.0 [5], included a
> > requirement that CAs MUST NOT issue certificates that have (emphasis
> added)
> > "incorrect extensions (e.g., SSL certificates that exclude SSL usage,
> > or ***authority
> > key IDs that include both the key ID and the issuer's issuer name and
> > serial number)***;"
>
> Isn't it strange that while RFC5912 modified the AuthorityKeyIdentifier
> definition to add ASN.1 constraints on presence or absence of both
> authorityCertIssuer/authorityCertSerialNumber elements, nothing has been
> added to extend the same constraint forbidding presence of keyIdentifier
> and issuer+serial? It would have been really easy if it was intended that
> way.
>

I don't think that "strange" is relevant here, particularly as it relates
to Mozilla policy? I was trying to head off that argument, but you jumped
fully into it with the reference to A.2. That is, even if you want to
suggest that A.2 of 5280 permits it, or that 5912 permits it, or that
X.509 permits it (which no browser really pays attention to, ITU-T being
what it is), that argument would be moot in the presence of Policy 1.0.


> Now, if a strict compliancy to RFC5280 is required, I'd like to understand
> how Mozilla NSS can be compliant with the following paragraph, taken from
> RFC5280 clause 4.2:
>
> At a minimum, applications conforming to this profile MUST recognize
> the following extensions: key usage (Section 4.2.1.3), certificate
> policies (Section 4.2.1.4), subject alternative name (Section
> 4.2.1.6), basic constraints (Section 4.2.1.9), name constraints
> (Section 4.2.1.10), policy constraints (Section 4.2.1.11), extended
> key usage (Section 4.2.1.12), and inhibit anyPolicy (Section
> 4.2.1.14).
>
> To my knowledge, unless this has changed in the past months, NSS doesn't
> properly handle CertificatePolicies, PolicyConstraints, and
> InhibitAnyPolicy.
>

It's entirely consistent to require CAs to conform to the RFC 5280 profile,
without requiring applications like NSS to conform to the 5280 profile. It's
not even a double standard: they're two entirely separable pieces. So it's not
worth responding to.

Jeremy Rowley

Oct 8, 2019, 8:16:24 PM
to ry...@sleevi.com, Wayne Thayer, mozilla-dev-security-policy
I think requiring publication of profiles for certs is a good idea. It’s part of what I’ve wanted to publish as part of our CPS. You can see most of our profiles here: https://content.digicert.com/wp-content/uploads/2019/07/Digicert-Certificate-Profiles.pdf, but it doesn’t include ICAs right now. That was an oversight that we should fix. Publication of profiles probably won’t prevent issues related to engineering snafus or more manual procedures. However, publication may eliminate a lot of the disagreement on BR/Mozilla policy wording. That’s a lot more work for the policy owners, though, so the community would probably need to be more actively involved in reviewing profiles. Requiring publication at least gives the public a chance to review the information, which may not exist today.

The manual component definitely introduces a lot of risk in sub CA creation, and the explanation I gave is broader than renewals. It’s more about the risks currently associated with Sub CAs. The difference between renewal and new issuance doesn’t exist at DigiCert – we got caught on that issue a long time ago.



Ryan Sleevi

Oct 8, 2019, 8:57:37 PM
to Jeremy Rowley, ry...@sleevi.com, Wayne Thayer, mozilla-dev-security-policy
On Tue, Oct 8, 2019 at 8:16 PM Jeremy Rowley <jeremy...@digicert.com>
wrote:

> I think requiring publication of profiles for certs is a good idea. It’s
> part of what I’ve wanted to publish as part of our CPS. You can see most of
> our profiles here:
> https://content.digicert.com/wp-content/uploads/2019/07/Digicert-Certificate-Profiles.pdf,
> but it doesn’t include ICAs right now. That was an oversight that we should
> fix.
>

FWIW, if you want inspiration for your updates, I'm super enamored with the
following CP/CPSes and their approach to disclosure:
- Izenpe:
http://www.izenpe.eus/contenidos/informacion/doc_especifica/en_def/adjuntos/Certificates_Profile.pdf
- SwissSign: http://repository.swisssign.com/SwissSign-Gold-CP-CPS.pdf (See
7.1)
- Sectigo: https://sectigo.com/uploads/files/Sectigo-CPS-v5.1.5.pdf (see
Appendix C)


> Publication of profiles probably won’t prevent issues related to
> engineering snafu’s or more manual procedures. However, publication may
> eliminate a lot of the disagreement on BR/Mozilla policy wording. That’s a
> lot more work though for the policy owners so the community would probably
> need to be more actively involved in reviewing profiles. Requiring
> publication at least gives the public a chance to review the information,
> which may not exist today.
>
>
>
> The manual component definitely introduces a lot of risk in sub CA
> creation, and the explanation I gave is broader than renewals. It’s more
> about the risks currently associated with Sub CAs. The difference between
> renewal and new issuance doesn’t exist at DigiCert – we got caught on that
> issue a long time ago.
>

Right, I don't discount that manual issuance is hard. For example, 100% of
Amazon Trust Service's incidents have been related to manual issuance, and
not necessarily sub-CAs (
https://bugzilla.mozilla.org/show_bug.cgi?id=1569266 ,
https://bugzilla.mozilla.org/show_bug.cgi?id=1574594 ,
https://bugzilla.mozilla.org/show_bug.cgi?id=1525710 ). I highlight this,
because Amazon has generally been extremely on-the-ball in tooling and
infrastructure to detect issues (e.g. certlint), and yet were still bitten
by when it gets to manual issues.

Yet, going back to the original problem: do we believe that the CA
communications are sufficient to raise awareness such that when a CA is
implementing a manual review process, they'll implement it correctly? If we
don't, then what can we do to improve? If we do, then what should we do
when CAs drop the ball?


Ryan Sleevi

Oct 14, 2019, 6:12:45 PM
to Ryan Sleevi, Jeremy Rowley, Wayne Thayer, mozilla-dev-security-policy
In the spirit of improving transparency, I've gone and filed
https://github.com/mozilla/pkipolicy/issues/192 , which is specific to
auditors.

However, I want to highlight this model (the model used by the US Federal
PKI), because it may also provide a roadmap for dealing with issues like
this / those caused by policy changes. Appendix C of those annual
requirements for the US Federal PKI includes a number of useful
requirements (really, all of them are in line with things we've discussed
here), but two particularly relevant requirements are:

*Guidance*: Previous year findings - Did the auditor review findings from the
previous year and ensure all findings were corrected as proposed during the
previous audit?
*Commentary*: Often, the auditor sees an Audit Correction Action Plan,
POA&M, or other evidence that the organization has recognized audit
findings and intends to correct them, but the auditor is not necessarily
engaged to assess the corrections at the time they are applied. The auditor
should review that all proposed corrections have addressed the previous
year’s findings.

*Guidance*: Changes - Because the FPKI relies on a mapped CP and/or CPS for
comparable operations, has the auditor been apprised of changes both to
documentation and operations from the previous audit?
*Commentary*: CPs change over time and each Participating PKI in the FPKI
has an obligation to remain in synch with the changing requirements of the
applicable FPKI CP (either FBCA or COMMON Policy) – has the participating
PKI’s CP and CPS been updated appropriately? If there have been other major
changes in operations, has a summary since the last year’s audit been
provided or discussed with the auditor?


This might be a model to further include/require within the overall audit
package. This would likely only make sense if also adding "Audit
Operational Findings" (which Illustrative Guidance for WebTrust now
includes guidance on, but which ETSI continues to refuse to add) and "Audit
MOA Findings" (for which "MOA" may be instead seen as "Mozilla Root
Certificate Policy" - i.e. the things above/beyond the BRs). We've already
seen WebTrust similarly developing reporting for "Architectural Overview",
and they've already updated reporting for "Assertion of Audit Scope", thus
showing in many ways, WebTrust already has the tools available to meet
these requirements. It would similarly be possible for ETSI-based audits to
meet these requirements, since the reports provided to browsers need not be
as limited as a Certification statement; they could include more holistic
reporting, in line with the eIDAS Conformity Assessment Reports.


Jeremy Rowley

Oct 15, 2019, 8:53:29 PM
to ry...@sleevi.com, Wayne Thayer, mozilla-dev-security-policy
I like this approach. You could either add a page in the policy document or include the information in the management assertion letter (or auditor letter) that gives information about the auditor’s credentials and background. I also like the idea of a summary of what the auditor followed up on from the previous year. This could be helpful for documenting where an auditor changed between years, to see what they reviewed that another auditor noted, or to see where the auditor had concerns from year to year. It can track where the CA may have a recurring issue instead of something that is a one-off concern.
